How AI drives creativity: three projects transforming reading, drawing, and photography

Artificial intelligence is still shadowed by dire warnings and doubts about how it will affect creative professions beyond routine, automatable tasks. Far from that disheartening view, we have compiled three projects that multiply inspiration: one brings immersion to reading, another turns photographs into poems, and a third transforms drawings into sound compositions.

 

Read with an extra layer of imagination

Augmented Reading, developed by the National Library Board of Singapore in partnership with Snap and LePub Singapore, proposes precisely that: adding an enhanced atmosphere to the act of reading. The system uses prototype Snap Spectacles augmented-reality glasses that trigger audiovisual effects in real time as the reader moves through the text. Singapore’s public library has explained that these prototypes will be tested in a selected library and that a version of the project will also be available to the developer community on Spectacles Lens Explorer.
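The underlying mechanic, mapping a reader's position in the text to audiovisual cues, can be sketched in a few lines. Everything below (passage markers, effect names) is invented for illustration; Snap has not published the Lens's internals.

```python
# Hypothetical sketch of position-triggered effects for augmented reading.
# Passage markers and effect names are made up for illustration.

EFFECT_CUES = {
    "chapter_1_storm": {"sound": "rain_loop", "visual": "dark_clouds"},
    "chapter_1_calm": {"sound": "birdsong", "visual": "soft_light"},
}

def effects_for_position(marker: str) -> dict:
    """Return the audiovisual cue for the passage the reader has reached,
    or an empty cue when nothing is attached to that passage."""
    return EFFECT_CUES.get(marker, {})

# As the glasses detect the reader reaching a marked passage,
# the matching atmosphere would be activated.
print(effects_for_position("chapter_1_storm")["sound"])  # rain_loop
```

The point of the sketch is only the shape of the idea: the book stays plain text, and a thin lookup layer decides which atmosphere each passage summons.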

The book ceases to be a closed object and becomes an interface capable of summoning sound, image, and rhythm. Rather than competing with reading, AI is enveloping it.

 

When the camera returns a poem

In Poetry Camera, a product of Kelin Carolyn Zhang and Ryan Mather, the logic is reversed in a particularly fruitful way. Instead of printing a photograph, the camera prints a poem about the captured scene.

Its first open version relied on accessible components such as the Raspberry Pi Zero 2 W, a camera module, and a thermal printer. According to Raspberry Pi, the system takes the image, extracts visual elements, and turns them into an AI-generated poem.
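The pipeline Raspberry Pi describes (capture, extract visual elements, generate a poem, print) can be sketched as below. The extraction and poem functions are stand-in stubs, not the project's actual code: on the real device an AI model handles both steps, and the returned string goes to the thermal printer.

```python
# Minimal sketch of the Poetry Camera pipeline described above.
# extract_visual_elements and compose_poem are placeholder stubs;
# the real device sends the capture to an AI model instead.

def extract_visual_elements(image_bytes: bytes) -> list[str]:
    """Stand-in for the vision step: return labels found in the scene."""
    # A real implementation would run an image-understanding model here.
    return ["harbor", "gulls", "low sun"]

def compose_poem(elements: list[str]) -> str:
    """Stand-in for the language step: turn scene labels into verse."""
    return "\n".join(f"the {element} holds still" for element in elements)

def snap_and_print(image_bytes: bytes) -> str:
    """Full pipeline: instead of saving the photo, emit a poem.
    On the device this string would be sent to the thermal printer."""
    return compose_poem(extract_visual_elements(image_bytes))

print(snap_and_print(b"raw-capture"))
```

Nothing is archived along the way: the image exists only long enough to be read, which is exactly the shift the project is after.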

That small shift also changes the nature of the memory. The everyday stops being archived and starts being read. In a visual ecosystem saturated with captures, there is something almost refreshing about this renunciation of the photo as a final result.

 

Drawing to listen

Musical Canvas, by Google Arts & Culture Lab with artist-in-residence Simon Doury, takes that same idea to the field of drawing. The tool generates a soundtrack from a stroke on a digital canvas.

Google explains that the experiment uses Gemini to describe the drawing and Lyria to generate the music; in its initial presentation, it also noted that visual filters such as Pixelate or Old Film modify the sound response.

Here, the drawing ceases to be just an image. It becomes the trigger for an atmosphere, an indirect score. Creativity is no longer confined to a single medium.

 

Viewed together, these three projects point to a suggestive idea: AI here is conceived not as an autonomous author but as a translation technology. It transforms text into atmosphere, scene into poem, and drawing into music. That hybrid condition is especially interesting for creative fields, where the isolated object matters less and less and the experience composed of several layers of perception matters more, opening new relationships among language, image, sound, and space to explore.