The brief was simple: use an AI bot to create some graphics for Lunar New Year. So to celebrate the Year of the Rabbit, a handful of us came up with the idea of picking our favourite pop culture icons born in the Year of the Rabbit and transforming them into bunnies!
We weren't without our reservations. In this cell team, some of us had tried AI generators with underwhelming success, and some hadn't tried them at all. Wrestling with our own wariness of artificial intelligence and our high design expectations, we set out to see where this would lead.
After checking out the outputs of a couple of AI generators, we settled on Midjourney. We surveyed the rest of the Chemistry team for their favourite icons, brainstormed keywords for how we wanted to depict them and went to town!
We realised that our prompt game needed to be on point to get the results we were hoping for. We reverse-engineered the visuals we envisioned for each icon into descriptions of what we wanted to see. It was mostly trial and error, seeing which prompts yielded better results. After a quick search online and studying others' prompts on Midjourney's Discord platform, we found some recommended keywords to include to enhance our results.
One of our icons, actor and comedian Mike Myers, is best known for his titular roles in Shrek and Austin Powers, but we wanted to depict him in his 1992 breakout role in Wayne's World.
While the initial prompts addressed the brief and included the elements we wanted, the outputs still missed the mark on consistency of likeness and recognisability. Many results were unsuccessful: generic-looking rabbits without any defining features.
We ultimately decided that as long as they were identifiable as their intended icons and within the realm of the brief, we'd take it. So we tweaked our prompts to add some visual context, like the setting and background.
The AI needed specificity in prompts but tended to overgeneralise the outputs when too many details were given. We found it was best to write clear and straightforward prompts — less is not more, more is meh, and just right is perfect.
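To give a sense of what that balance looked like, a hypothetical prompt in this vein (not one of our actual inputs) might read: "Mike Myers as Wayne Campbell from Wayne's World as an anthropomorphic rabbit, 1990s basement TV studio, portrait, photorealistic, soft studio lighting". That's enough context to anchor the likeness and the setting without burying the model in detail.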
For certain portraits, getting an accurate depiction was challenging. Despite many attempts and permutations of our queries, Quentin Tarantino refused to turn into a rabbit, and the likenesses of Jet Li and Hilary Duff weren't quite right either.
The results were still aesthetically pleasing, but perhaps with more diverse references the AI could have conjured more accurate likenesses of these individuals. Or perhaps this is a deliberate restriction by Midjourney to avoid deepfakes.
The AI model also seems to have issues understanding human anatomy, often depicting missing or disfigured appendages in the renderings: an unnatural number of fingers on one hand, or ears in the wrong places.
Ultimately, we still had to touch up our final selections to smooth over any awkward mutations or nonsensical details irrelevant to the subject matter.
"Even though most of the images we not completely accurate to what we had hoped for, I was blown away by the level of detail and fidelity of the generated visuals. If someone told me an artist painted it, I would not know the difference." — Ash, Experience Designer.
Stay tuned for part 2, where we sit down with the team to reflect on the experience and the implications of AI for the future of creativity!