Exploring the Blade Runner mood with Big Sleep and CLIP-GLaSS
A Colab Generative Adversarial Network augurs a new wave of preliminary style exploration for architects and designers
Recently I wrote a piece on AI in architecture [1]. As is often the case, the blizzard of research I had to do led me down an entertaining rabbit warren, and - as usual - I later found some things I might have included in the original piece.
Or not - the Big Sleep Colab is temperamental, to say the least. It's capable of generating semi-abstract images from a text prompt that you provide, with OpenAI's CLIP model bridging the divide between text and image synthesis by steering a BigGAN generator toward the prompt.
The GitHub repository [2] powering Big Sleep can be downloaded and run directly, but the Google Colab notebook (it's just a web page with buttons that send commands to Google's processing infrastructure) makes it extraordinarily easy to experiment with text-to-image AI synthesis.
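For anyone who wants to skip the notebook entirely, the same technique is also packaged as a pip module. The sketch below assumes lucidrains' `big-sleep` package (`pip install big-sleep`); the parameter values are illustrative rather than recommended settings, and the generation call itself is shown commented out because it needs a CUDA GPU:

```python
# A minimal sketch of running Big Sleep locally, assuming the
# `big-sleep` pip package (lucidrains' CLIP + BigGAN implementation).
prompt = "A cathedral in the style of Blade Runner"

# The actual generation loop requires a CUDA GPU, so it is shown
# but not executed here:
#
# from big_sleep import Imagine
#
# dream = Imagine(
#     text=prompt,
#     lr=0.07,            # learning rate for the latent search
#     save_every=25,      # write an intermediate image every 25 steps
#     save_progress=True  # keep intermediates rather than overwriting
# )
# dream()  # saves progressively refined PNGs to the working directory
```

The package's README also describes a `dream` command-line entry point, so running it from a shell with the prompt as an argument should work as well.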
Type in ‘A cathedral in the style of Blade Runner’ and, after a few misfires, Big Sleep begins to deliver:
The datasets underpinning Big Sleep have clearly taken in and digested the wealth of online resources [3] about production art for the two Blade Runner movies. The images produced are sketchy and stylized in the manner of Syd Mead, Ron Cobb, and the other artists who contributed to the franchise (if we can call it that after one belated sequel).
Big Sleep is more than willing to speculate. If you’re wondering what Joanna Cassidy’s snake-centered dance routine would have been like if it had been included in the 1982 Ridley Scott movie [4], AI will have a go at recreating it:
Experiment with semantic context, though, because the NLP at play here can take things rather literally. If you put in ‘Zhora dances in sequins’, it will probably create a room full of sequins as an environmental context; ‘Zhora wearing sequins’ works better. In either case, you’re going to have to kiss a lot of frogs before anything recognizable, coherent or pleasing emerges [5].
I’ll include some of the other Blade Runner generations for fun at the end of this post.
OpenAI’s CLIP framework also powers another fascinating Google Colab outing: CLIP-GLaSS. There’s a little more clicking to do before you get to prompt the AI, but not much. It’s quite good at designing moody bar interiors for a putative Blade Runner movie:
Unsurprisingly, CLIP GLaSS also iterates nicely through PKD-style urban street scenes:
If you’re ripping off one style, you’re Brian De Palma (Hitchcock). If you’re ripping off ten genres, you’re Tarantino. I don’t think that CLIP-GLaSS has quite crossed over to having fully digested its influences - it finds a connection, starts to obsess about it and takes the phone off the hook. For now.
Perhaps this monomania will make it relatively easy, at least for the next few years, to discern copyrighted content that has contributed to datasets. But as the GPU infrastructure grows past the current chip crisis, and as the latent space grows bigger and the connections become more abstruse, this kind of GAN-based exploration seems set to be a genuinely fascinating and useful stylistic mood-board for architects, interior designers and production designers, among others.
Martin Anderson is a freelance writer on AI and machine learning. He can be contacted at https://www.linkedin.com/in/martin-anderson-ai-writer/
In this and other CLIP-based explorations, the source material is revealed in the output. The model has limited capacity and attention for any particular subject, and won’t have seen all possible source material for a given topic. It therefore favors juxtaposing whole elements that are each relatively faithful to the originals, such as ‘Zhora’ + ‘bar’, rather than making small, fiddly changes to the base colors and shapes of the individual concepts it ingested during training.