Lately, it seems that creatives everywhere have been doing double takes as AI-powered tools start to seep into mainstream media. With DALL-E creations hot on the heels of graphic designers and AI copywriting tools like Jasper looming over busy marketing teams, now’s not the time to stick our heads in the sand. Instead, we want to find out whether this new wave of computer-controlled craft is really a cause for concern – or if we can make it work in our favour.
First things first: when talking about AI potentially replacing us creatives, it’s worth examining what creativity really means. Albert Einstein defined it as “seeing what others see and thinking what no one else ever thought.” Many of us here at saintnicks agree, viewing creativity as inventiveness: our inherent ability to use imagination to originate something new. The Cambridge English Dictionary concurs, defining creativity as “the ability to produce or use original and unusual ideas.” This human ingenuity is difficult to replicate – and the reason why icons like Beethoven, Maya Angelou, Matisse, the Wright brothers and Wes Anderson are so revered.
Others, however – Steve Jobs, for example – view creativity from a more practical point of view. Jobs said, “Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something.” That implies creativity is a skill that can be learned and developed over time, using reference points as inspiration. If humans, therefore, only build on what they have learned and what others have done in order to be creative, then it’s easy to argue that AI, too, can be creative. Because that’s essentially what AI does – it takes existing information (data) and, using clever algorithms, generates fresh content. But we’ll get to that a bit later.
In defence of creatives, I believe there’s more to it. Sure, creativity is original, inventive, ingenious – maybe even learned. But it’s also intentional. It’s emotional. It’s contextual. As a copywriter, for example, I’m able to write with foresight and intuition. I know that an audience is likely to prefer one tagline over another, or laugh at a certain word, or be touched by a speech, simply because I share the same human experience as the people I’m talking to. I’m sentient. I consciously want my readers to feel something; I intend for my words to elicit a response.
As humans, our thoughts, our memories, our physical sensations and the environments that surround us play huge, important parts in our lives. It’s our creativity that enables us to make connections between these things. When we create art – and I mean art in its loosest sense here, i.e. anything that’s an expression of creativity – we are either trying to discover something about ourselves, make sense of the world, affect our audience or express our thoughts and feelings. We have an innate human desire, an urge to create something meaningful.
A machine can’t do that. It doesn’t have the capacity for free thinking, nor does it have emotional intention. It can’t look at its audience and think, “I want my art to make you laugh or cry, I want to start a discussion around this topic, I want to comment on the state of the world.” Even the smartest AI can’t independently create art with meaning.
So, how can AI still be a threat to creatives if it can’t have an intention? Well, let’s look at the world of visual art for a moment.
Those who recently attended Glastonbury Festival may have crossed paths with Ai-Da, an artist who created portraits of the four headlining acts during a live painting demonstration. Although ‘live’ may not be the right word for it. You see, Ai-Da is a robot. The world’s first ultra-realistic artist robot, in fact. She uses cameras in her eyes, AI algorithms and a robotic arm to draw, paint, sculpt and perform poetry. For years, she’s travelled the world, displaying her artwork in galleries and talking about her experience as a humanoid artist. You can even follow her on Instagram.
While, at first glance, Ai-Da could be mistaken for something from the year 3000, the AI she uses to create her art is quite simple. Allow me to get a bit technical here. There are two main types of algorithm used to create images through AI. The first is Neural Style Transfer – where AI applies the style of one image to another. The Mona Lisa recreated in the style of Kandinsky. A photograph of an avocado re-styled as Warhol’s pop art. A pencil sketch turned into a Picasso. To function, Neural Style Transfer needs both images as reference points to create its final product. This is what Ai-Da does, too. Using her ‘eyes’, she receives a reference image, which she then replicates in her own, pre-programmed style. To really wrap your head around it, you can think of Neural Style Transfer as a fancy Instagram filter. Still with me?
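For the technically curious, the heart of Neural Style Transfer is a ‘style loss’ built from Gram matrices – correlations between the feature channels a neural network extracts from an image. The sketch below illustrates the idea in plain NumPy, using random arrays as stand-ins for the feature maps a real system would pull from a convolutional network; it shows the loss calculation only, not the full image optimisation.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels: flatten the spatial
    dimensions, then take channel-by-channel dot products."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(style_feats, generated_feats):
    """Mean squared difference between the two Gram matrices –
    how far the generated image's 'style' is from the target style."""
    return float(np.mean((gram_matrix(style_feats) - gram_matrix(generated_feats)) ** 2))

def content_loss(content_feats, generated_feats):
    """Mean squared difference between raw feature maps –
    how far the generated image has drifted from the original content."""
    return float(np.mean((content_feats - generated_feats) ** 2))

# Toy "feature maps": 3 channels over an 8x8 activation grid.
rng = np.random.default_rng(0)
content = rng.standard_normal((3, 8, 8))
style = rng.standard_normal((3, 8, 8))

# An image has zero style loss against itself...
assert style_loss(style, style) == 0.0
# ...while two different images do not.
assert style_loss(style, content) > 0.0
```

A real style-transfer system repeatedly nudges the generated image to shrink a weighted sum of these two losses – keeping the Mona Lisa’s content while pulling its Gram matrices towards Kandinsky’s.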
Then there’s Generative Adversarial Networks – or GANs, for short. Unlike Neural Style Transfer, GANs can create original images from scratch. Well, sort of. A GAN learns the patterns in a set of training data, then generates new examples that could plausibly fit in with that data. So if the dataset is Van Gogh’s roughly 900 paintings, the GAN would generate a new, original image that looks like it could belong in a Van Gogh collection.
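To make the ‘adversarial’ part concrete, here’s a deliberately over-simplified NumPy sketch. The ‘artworks’ are just numbers, the two networks are hand-set formulas rather than trained models, and no learning actually happens – but it shows the tug-of-war at a GAN’s core: a discriminator scores how ‘real’ each sample looks, and the generator is judged by how well its fakes fool that score.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(x):
    """Toy discriminator: probability that a sample is 'real'.
    Real data sits around 4.0, so it trusts values near there."""
    return sigmoid(2.0 * x - 6.0)

def generator(noise, shift):
    """Toy generator: turns random noise into candidate 'artworks'
    (here, just numbers) by shifting it towards the data."""
    return noise + shift

rng = np.random.default_rng(42)
real = rng.normal(4.0, 0.5, size=256)   # stand-in for the real dataset
noise = rng.normal(0.0, 0.5, size=256)

def discriminator_loss(shift):
    """How badly the discriminator separates real from fake:
    it wants real samples scored 1 and generated samples scored 0."""
    fake = generator(noise, shift)
    return float(-np.mean(np.log(discriminator(real)))
                 - np.mean(np.log(1.0 - discriminator(fake))))

def generator_loss(shift):
    """How badly the generator fools the discriminator:
    it wants its fakes scored as real (close to 1)."""
    fake = generator(noise, shift)
    return float(-np.mean(np.log(discriminator(fake))))

# A generator whose output lands near the real data (shift=4.0)
# fools the discriminator far better than one that doesn't (shift=1.0).
assert generator_loss(4.0) < generator_loss(1.0)
```

In a real GAN, the two losses are minimised in alternation by gradient descent, each network improving against the other until the fakes are genuinely hard to tell apart from the training data.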
The results of GANs are pretty successful. So successful, in fact, that in 2018, Christie’s became the first auction house to offer a work of art created by an algorithm – which sold for a whopping $432,500. The artists behind Edmond de Belamy, as the artwork is called, are French collective Obvious. Using a dataset of 15,000 portraits from WikiArt, painted (by humans) between the 14th and 20th centuries, Obvious’ GAN created a new piece of art depicting a somewhat-blurry gentleman.
OpenAI’s DALL-E, the text-to-image tool mentioned earlier, is currently not available to the public – but the concept quickly took on a viral life of its own when Boris Dayma, a machine learning engineer, created the more accessible DALL-E mini (now called craiyon). Trained on much smaller amounts of data than DALL-E, craiyon’s machine learning improves day by day based on information inputted by its millions of users. For now, the resulting images are, at best, suited to meme culture – but as these technologies develop, it’s easy to see how they could become a part of everyday professional life. Print ads, book covers, blog headers, social posts, stock imagery, web content… the possibilities are endless. So where does that leave us?
I think the answer lies within the execution. All of these technologies, from DALL-E to Jasper, rely on prompts. They require us – the humans – to do the big thinking before they can switch on and start churning out their art. And it’s within the prompt that true creativity really lies. It’s not the machine that came up with the idea to have steampunk teddies go grocery shopping, it’s the person. The prompt satisfies both our aforementioned definitions of creativity – it requires imagination, and an ability to come up with something original, but it also requires a connection to be made, as Steve Jobs said. AI is the executor, the maker, but we are the originators, looking at things differently, thinking what no one else has thought. To find the perfect image, you need to provide the perfect prompt. If AI can’t originate, then we creatives are still needed.
Now that we’re safe in the knowledge that AI, for the time being, isn’t going to come for our jobs entirely, we might even be able to look at how it can enhance our work and make us better. As OpenAI’s CEO Sam Altman described it in an interview with the New Yorker, AI can – and should – ultimately just be treated as “an extension of your own creativity.”
In agency life, a lot of time can be wasted during the initial concepting phase, when all you really want to do is spitball ideas and get your clients’ reactions. Tools like DALL-E can be a great help if you’re short on time but want to present a few visuals to illustrate an idea. Even if it’s just a word on a shop front or a puppy wearing a hat. It gives a lot more power to the “What if?” when suddenly that question can be answered in minutes, rather than having to mock it all up in Photoshop for hours. Plus, you’ll never have to trudge through a stock image library ever again.
One of the most remarkable features of DALL-E is its ability to make edits to an image it has already created. Want to see what a flamingo would look like inside the pool rather than next to it? Just tell DALL-E to move it. Boom. Little tweaks that can take up annoying amounts of time can be executed with a few text prompts.
Writer’s block can be one of the most debilitating experiences for someone whose livelihood depends on how many words they can get down in an hour. AI tools like Copy.ai can act not only as a timesaver when deadlines are looming but also serve up inspiration when you’ve been staring at a blank page for far too long. Using a link, a couple of words or a simple description, Copy.ai can generate headlines for Facebook, brand mottos, meta descriptions and more. It even lets you rewrite existing text in a different tone. The output is never final-product worthy and definitely needs a human eye – and hand – to finish it off for a client, but it’s a great tool for getting that pesky first draft out of the way. Full disclosure: I actually used Copy.ai myself recently to come up with some alternatives for a Call to Action button – and it worked a treat.
So, there you have it. Whilst AI might come off as a bit of a scary, magical beast at first, it can actually serve as a handy little tool to keep our creative juices flowing. And no, I don’t think it will be replacing our creative team anytime soon. We’re far too much fun in the office.
To chat with our team or learn more about saintnicks, head to www.saintnicks.uk.com.
We bring out the best in brands. We take you further.