Simile is an audio-visual AI poetry generator, created by a small team of us.

In 2020 I was lucky to collaborate with artist Glenn Davidson of the Cardiff-based Artstation. Together with computing science researchers from the University of the West of England (UWE): Mark Palmer, a philosophy-trained Senior Lecturer in Computing Science and Creative Technologies; Sean Butler, a games industry veteran and Senior Lecturer in Games Technology; and UWE student intern Matt Filer, we explored the practice and potential of AI poetry generated in response to image recognition, in this case triggered by image upload.

Try it.


The prototype grew out of an idea that Glenn and I had to make an AI-enhanced audio-visual postcard generator for digital heritage sites.  One of our inspirations was Primitive Objects, a playful and innovative mobile object-recognition experience that I had co-produced for Technopolis, a contemporary heritage site in Athens, as part of the 2019 EU Trust in Play urban game school.

Primitive Objects was created using an open-source object-recognition program for mobile.  Building on this application, Bronwin scripted novel text-to-speech chatbot responses to discrete object-recognition events on site, creating a playful, narrativised site-exploration experience.

With the assistance of Matt Filer, we were keen to develop this novel approach further, potentially adding:

  • Binaural sound events
  • Augmented reality visuals, operating as clues to be discovered, as well as rewards to be enjoyed and shared
  • Spoken word elements voiced by an actor
  • Spoken and/or text input from users
  • Narrativised user tracking, to create a gamified story trail as part of this experience

We aimed to produce a demo to gain useful design insights.

One of the first things Matt did was to link an existing haiku-generation program with another program that paired visual object recognition with audio playback.  The resulting flux of audio, visuals, and text provoked many follow-on questions.
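The glue between those two programs can be sketched roughly as follows. This is a minimal stand-in, not Matt's actual code: the recognition model, haiku generator, and image labels here are all illustrative stubs.

```python
# Hypothetical sketch of the prototype pipeline: recognise an uploaded image,
# seed a haiku with the label, then hand the text to audio playback.
# All three component functions are illustrative stubs, not the real programs.

def recognise(image_name):
    """Stand-in for the object-recognition step: returns a label for the image."""
    labels = {"postcard.jpg": "lighthouse", "beach.jpg": "sea"}
    return labels.get(image_name, "island")

def haiku_for(label):
    """Stand-in for the haiku generator: composes lines seeded by the label."""
    return f"the {label} waits /\nsignals cross the open water /\nno reply returns"

def postcard(image_name):
    """Recognise the image and compose a haiku from the resulting label."""
    label = recognise(image_name)
    poem = haiku_for(label)
    # In the prototype, this text would then be passed to text-to-speech playback.
    return label, poem
```

The point of the sketch is the shape of the chain, recognition event in, spoken poem out, rather than any particular model or generator.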

Many of these we have had neither the time nor the funding to explore with Matt, as yet.

Nevertheless, the possibilities that presented themselves included:

  1. Enriching the naming of objects with background information, using archival research to include a range of curious and intriguing facts about the image being classified.
  2. Scripting with sound and image as much as with poetry.
  3. Enabling participants to co-author poetic postcards by drawing upon a corpus of inspirational poets and writers like Keats, Wordsworth and others.

There are many wonders to be had from a collaboration with the archives, as well as notable omissions (such as the voices of the poor, the dispossessed, and those discriminated against, who often don’t feature in the archives), and those collaborations offer many potential variations in between.

One idea we toyed with was whether it would be possible to script character interactions woven in between these postcard productions.  Character interactions are an ideal way to link these sorts of functionality either to historic population movements or to a specific project, which in this case happened to be an installation and event on Flatholm Island that Glenn has been masterminding, inspired by the island’s links to Marconi’s transmission of the first wireless signals over open sea, from Flat Holm to Lavernock Point near Penarth, Wales.

The intriguing result of this thought is that we added character elements.  Guided by suggestions from Glenn and Mark, Matt created GPS trigger zones, like plot points on a landscape.  As participants move through the landscape the poetry changes, so they can compose and alter poetry simply by walking through different locales.  This approach was inspired by our lack of access to the island during Covid-19 restrictions, but it also introduced numerous scripting opportunities for future works.  If you see curious references to various character identities (like the thoughtful Nurse in the Hospital referenced below), that is why they appear: they are historic characters relevant to Glenn’s Flatholm Island project.
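A GPS trigger zone of this kind can be sketched as a circle on the map that switches the active poetry theme when a walker enters it. The zone names, coordinates, and themes below are hypothetical placeholders, not the project's actual data.

```python
import math

# Hypothetical trigger zones: centre coordinates, radius in metres, and the
# poetry theme that zone activates. Values are illustrative placeholders.
ZONES = [
    {"name": "hospital", "lat": 51.3770, "lon": -3.1220, "radius_m": 40, "theme": "nurse"},
    {"name": "foghorn",  "lat": 51.3745, "lon": -3.1205, "radius_m": 60, "theme": "sea"},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_theme(lat, lon, default="island"):
    """Return the theme of the first zone containing the walker, if any."""
    for zone in ZONES:
        if haversine_m(lat, lon, zone["lat"], zone["lon"]) <= zone["radius_m"]:
            return zone["theme"]
    return default
```

Each GPS update simply calls `active_theme` with the walker's position; crossing from one circle to another changes the material the generator draws on, which is how walking a route composes the poem.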

In the end, the time required to properly test location-based AI poetry (which we are nevertheless keen to do) and the size of the corpus required to train an AI meant that we couldn’t (as yet) easily and evocatively focus upon an interaction with just one poet.  The results too easily seemed like a poor copy of the original.  Instead, it became clear that we needed to mix it up a bit.

The breakthrough came when Matt started to write his own machine-learning poetry script and train it on a mix of sources, including the esteemed poet Philip Gross, who had been commissioned to help write the Flatholm Island experience, and even Dr. Seuss.
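Matt's script used machine learning proper; as a much simpler stand-in, a word-level Markov chain shows the basic idea of generating new lines from a mixed corpus. The toy corpus below is a placeholder, not the project's training data.

```python
import random
from collections import defaultdict

def build_chain(corpus_lines):
    """Map each word to the list of words that follow it anywhere in the corpus."""
    chain = defaultdict(list)
    for line in corpus_lines:
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate_line(chain, start, max_words=8, seed=None):
    """Random-walk the chain from a start word, producing a short poem-like line."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# A deliberately mixed toy corpus (placeholder lines, not the real sources).
corpus = [
    "the sea remembers every signal we send",
    "we send small words across the water",
    "the water keeps no copy of the sea",
]
chain = build_chain(corpus)
```

Because the chain blends transitions from every source, the output mixes registers in a way no single author would, which is roughly why training on several poets at once escaped the "poor copy" problem.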

We are the french fries of human woe

Yes, the AI poetry generator did write that and this…

Foaming at the line to get out

and this…

phenomenon sky…my love should be the purple.

Philip Gross described the experience of seeing an AI compose poetry inspired by his own publications as deeply uncanny.

AI poetry is chancy, sometimes good, sometimes more curious than uncanny (which perhaps, ultimately, is even more reflective of the quixotic and changeable nature of human creativity than we might imagine).  Changeable, fleeting, and occasionally brilliant…

At times I did wonder if there might be something to say about a playful gambling interface, like a revolver that allows participants to shoot down the results, or not, as a way to reload them… and wouldn’t it be great if that were how we could train the AI?  What if we sent links to poets the world over (we could even compare responsive themes across countries) and asked them to train this AI themselves, down the barrel of a smoking gun… what might they achieve together?  In the meantime we opted for a ‘regenerate poem’ button.


Do you want to try that again?