Feared by many and distrusted by others, machine learning may yet turn out to be a creative’s best friend, performing the low-level tasks that eat up their valuable time.
Ever since McCann Japan debuted robotic creative director AI-CD β in 2016, creatives have been left wondering how long they’ve got until steel-clad copywriters sweep through the boardrooms and studios of adland. Rather than being made redundant by the rise of the algorithms though, many will find themselves training their AI colleagues to do their jobs.
At the New York offices of digital production agency MediaMonks, creative technologist Sam Snider-Held is running towards, rather than away from, such a vision of the future. His most recent project saw him train a neural network to design virtual landscapes. The algorithm ‘watched’ him work on a VR landscape, and then used his example to inform future design choices.
Snider-Held suggests these experiments could culminate in a ‘surgeon’s assistant’, an entity capable of predicting a creative’s choices and presenting them with the tools they need. He says: “I started using machine learning as a way to get what I wanted faster. I then started to think, what if I had a machine that knew what I am going to need or what I am going to do at any given point in time?”
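The core idea behind such an assistant — learning which action a designer tends to take next, given what they just did — can be sketched with a toy frequency model. Everything below (the action names, the bigram approach) is an illustrative assumption, not a description of MediaMonks’ actual system, which uses a neural network:

```python
from collections import Counter, defaultdict

# Hypothetical log of a designer's actions in a VR landscape tool.
actions = ["place_tree", "adjust_light", "place_tree", "place_rock",
           "adjust_light", "place_tree", "place_rock", "adjust_light"]

# Count how often each action follows another (a simple bigram model).
transitions = defaultdict(Counter)
for prev, nxt in zip(actions, actions[1:]):
    transitions[prev][nxt] += 1

def predict_next(action):
    """Suggest the tool most often used after the given action."""
    if action not in transitions:
        return None
    return transitions[action].most_common(1)[0][0]

print(predict_next("place_tree"))  # -> "place_rock" in this toy log
```

A production system would replace the frequency table with a trained model, but the interface is the same: observe the working session, then surface the tool the creative is most likely to reach for next.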
He is fairly confident that machine learning will be used in the near future for a range of low-level creative work – augmenting, rather than usurping, designers. “I think a machine doing very basic visual tasks is something we’ll see quite soon.”
His experiments come at a time when some of the biggest creative companies on the planet are pursuing creative applications of AI. Magenta, an initiative from the Google Brain team, is aimed at using machine learning in music.
Applications include NSynth, a synthesizer that merges existing sounds into entirely new ones for musicians to use. There’s also Onsets and Frames, an application that uses neural networks to predict patterns in sound, allowing it to automatically transcribe piano recordings.
Adobe’s work in this sphere includes its AI platform Sensei and automated image editor DeepFill. “I think you will see that sort of stuff in Adobe products in the next couple of years,” says Snider-Held.
Vijay Gupta, a retail strategy director at Adobe, says the company’s AI research aims to “enhance, not replace” creative work. “By automating the routine elements of this process, creatives can gain more time to work on original concepts,” he says.
Gupta points to Launch It, an app previewed at this year’s Adobe Summit that automatically tags web content. He explains: “This is just one way AI is helping to solve problems. Far from replacing and standardizing creative output, AI will remove the barriers that stand in the way of creativity.”
However, Snider-Held says new technology is always a double-edged sword. “It’s something we should be looking into on a very critical level.”
Still, he says the ability to train machines could protect creatives from redundancy. He asks: “Are the machines going to get rid of the need for me, or can we use it to make me more productive?”
Gupta says the true aim of Adobe Sensei is ‘IA’ – intelligence amplification. He says: “Human intelligence and creativity will always be number one, but it can be amplified massively by AI.”
Snider-Held, who learned to create algorithms by watching YouTube videos, says the skills wielded by the next generation of creatives will soon enable them to use machine learning for day-to-day tasks. He concludes: “I think that as teenagers grow up using these tools to make content, it won’t be a ‘black box’ to them in the way it is to us. And maybe, along the way, they’ll find something totally crazy that will transform the way we do work.”
Credit: The Drum