
The Coming Age of Creative AI: From Roboadvisors to Roboartists


Hallucinatory image created with Dreamscope app by the author. Google engineers discovered this technique by reversing image recognition neural networks trained on dogs.

Something went very wrong with one of Google’s neural networks. It was designed for a simple task: identify dogs in photos. But a curious developer ran the algorithm in reverse, and it began to hallucinate dogs where there were none before. The psychedelic images resembled those of Salvador Dalí and echoed across the internet under the shorthand “Deep Dream”.
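The core trick behind Deep Dream is to freeze a trained network’s weights and run gradient ascent on the image itself, so that a chosen neuron fires more and more strongly. The real system does this through a deep convolutional network; the sketch below is a deliberately tiny stand-in, with a single linear “dog neuron” over a 64-value input, just to show the loop.

```python
import numpy as np

# Toy sketch of the Deep Dream idea: instead of adjusting weights to
# classify an image, we freeze the weights and adjust the *image* by
# gradient ascent so a chosen "dog neuron" fires more strongly.
# (A real system uses a deep convolutional network; this single
# linear neuron is an illustrative stand-in.)

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # frozen weights of the "dog neuron"
image = rng.normal(size=64)      # the input we will hallucinate into

def activation(img):
    return float(w @ img)        # how strongly the neuron fires

before = activation(image)
for _ in range(100):
    grad = w                     # d(activation)/d(image) for a linear unit
    image += 0.1 * grad          # ascend: nudge "pixels" toward more dog

after = activation(image)
print(before, after)             # the activation grows as the image "dreams"
```

In a full implementation the gradient comes from backpropagation through many convolutional layers, which is what produces the swirling, fractal dog faces rather than this toy’s flat numbers.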

Within a few months of this discovery, an academic paper repeated the same magic trick for famous painters. Data scientists built a set of robo-artists out of digital neuron clusters called convolutional neural networks. They used machine learning to reverse-engineer visual art resembling Picasso’s dancing lines, Van Gogh’s hypnotic brush strokes, and Edvard Munch’s emotional impact. We have taught robots how to make art by teaching them what makes an artistic style. And so “Deep Style” was born.

Style transfer illustration from “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.


What should we make of our creative automatons? In human affairs, children of successful lawyers and accountants often have the freedom to become creators, liberated from monetary constraints and able to dance, paint and make music. This time, our software progeny are transcending their humble beginnings. They just might become humanity’s greatest artists, amplifying and robotizing creativity.

The computer revolution has catalyzed tremendous automation: first of physical labor in places like factories, and now increasingly of intellectual labor, from legal discovery to roboadvisors. As Marc Andreessen put it, “Software is eating the world”. A recent McKinsey study projects that 45% of all office work will be automated in the near future. Software processes our paperwork, searches for results, takes payments, directs cars, and talks with other systems to create lattices of efficiency. But our programs to date have been deeply analytical, following prescribed top-down rules to carry out productivity tasks.

That left-brained set of rigid algorithms is about to meet its right-brained counterpart. The key is that this new sort of software isn’t replicating a set of rules to distort an image per human design. Rather, it is using sophisticated math to process visual information, extract unique patterns, and recursively learn what makes any particular artistic style unique. Then it can take off from there. Think of it as statistical intuition, not unlike our own instincts and gut impulses. Mobile apps like Dreamscope (free, amazing, on iOS/Android) allow a user to apply this machine-learned creativity to a photo on command. Dreamscope has indexed dozens of creative algorithms—a robot for each painter—and enables a user to “seed” their own machine artist. How long until every creative human endeavor has been patterned in this way?
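One concrete way this “statistical intuition” works in the Gatys et al. paper is that style is summarized by the Gram matrix of a network layer’s feature maps: which features co-occur, regardless of where they appear in the image. The numpy sketch below illustrates that property with random stand-in features rather than real network activations.

```python
import numpy as np

# Minimal sketch of the style-matching idea from Gatys et al.:
# "style" is captured by the Gram matrix of a layer's feature maps
# (which features co-occur), discarding *where* they occur. The
# "feature maps" here are random stand-ins; a real system extracts
# them from a pretrained convolutional network.

def gram(features):
    # features: (channels, pixels) -> (channels, channels) correlations
    return features @ features.T / features.shape[1]

def style_loss(f_generated, f_style):
    # mean squared difference between the two Gram matrices
    return float(np.mean((gram(f_generated) - gram(f_style)) ** 2))

rng = np.random.default_rng(1)
f_style = rng.normal(size=(8, 100))            # "Van Gogh" layer features
f_shuffled = f_style[:, rng.permutation(100)]  # same style, rearranged
f_other = rng.normal(size=(8, 100))            # an unrelated image

# Rearranging pixel positions leaves the Gram matrix unchanged, so the
# style loss stays at zero, while a different image scores higher.
print(style_loss(f_shuffled, f_style), style_loss(f_other, f_style))
```

That position-invariance is why a style-transfer output can keep a photo’s layout (the content) while repainting its texture (the style): the optimizer minimizes this style loss alongside a separate content loss.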

Style transfer technology as deployed on mobile devices on Dreamscope. Original image source by Christopher Michel.


Already, we find machine learning applications in the visual arts, music and writing. The programs are young and often spit out creations that seem somehow wrong, though we cannot put a finger on why. These machine artbots are from the wrong side of the Uncanny Valley – a category of things that attempt to mimic humanity but in their artifice create unease.

And yet, we have never been closer to a room of monkeys typing out the collected works of Shakespeare. Just ask a robot that has ingested all of Shakespeare’s works and is trained to generate soulful prose on command, ad infinitum. Or turn on machine-Bach, mathematically generating emotional sound vibrations that may some day be indistinguishable from the real thing. The texts below are neural-network-generated samples trained on Shakespeare. Source: Andrej Karpathy



One loyal of my love, the wedding-body touchest thee: I pray,

Henceforwards, and submiss the truth! though my throne

Lives as mock’d my pardon with some untold

Attore sack lop and shrum’ them up:

But be preserved with spirits, so brimfibed again!

My voices were so early, I was enough.



Then let him withdraw them debour to branch ere any any

day, but to prevail’d be penny of a merry tongue

Which the exploits of fools look with their veins.
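Karpathy’s samples come from a recurrent network (an LSTM) trained character by character. As a much cruder stand-in for the same idea, the sketch below learns which character tends to follow which short context in a tiny corpus, then samples new text indefinitely; the corpus string here is invented for illustration, not Karpathy’s training data.

```python
import random
from collections import defaultdict

# Crude sketch of character-level text generation. The real system is
# a recurrent neural network; this simple Markov chain over short
# character contexts stands in for the idea: learn which character
# tends to follow which context, then sample new text ad infinitum.

corpus = (
    "I pray thee, mark me. Henceforward let my throne be thine, "
    "and let thy tongue speak mercy to the merry fools of court. "
)

ORDER = 3
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    context = corpus[i:i + ORDER]
    model[context].append(corpus[i + ORDER])   # record what follows

def generate(length, seed="I p"):
    random.seed(42)                            # deterministic demo output
    out = seed
    for _ in range(length):
        choices = model.get(out[-ORDER:])
        if not choices:                        # dead end: restart from seed
            out += seed
            continue
        out += random.choice(choices)
    return out

print(generate(80))
```

The output is fluent-looking gibberish in exactly the spirit of the samples above: locally plausible character statistics with no understanding behind them, which an LSTM improves on by carrying longer-range state.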


Beware, artists. Automation will impact not only the analytical industries, but also those that require creativity, originality and intuition—domains that were once believed to be uniquely human. If you are an artist, musician, or writer, artificial intelligence is about to present challenges and opportunities that rival those posed to painters by the invention of photography in the 1800s. What now seems like a crude, hollow reproduction of a mystical human endeavor could eventually be responsible for the bulk of all art, initiated by humans but outsourced to machines.

There are many objections to the idea that true art can even be made by software. Isn’t the human always the root of the process? Isn’t the artist’s impulse to create profoundly human? Isn’t the point of art to in some way symbolize and instantiate the unique point of view of the human artist in order to evoke a uniquely human response in the viewer or listener? Aren’t our cultural values—a result of the arbitrary and arduous evolution of a mammalian body—the only lens capable of authoring and appreciating art, as such? So what will be the message or set of values implicit in machine-generated art? These questions are fair, but in my opinion only partially relevant.

As the shift toward the machine continues, there will be less and less space for the human execution of what once qualified as creative endeavor. Instead of composing music, we will create randomization algorithms that combine software-composers on the fly, reacting to our quantified moods and surroundings. Instead of learning to paint, aspiring artists will be better served learning to code programs that render creative outcomes in simulated virtual reality environments.

The raw materials for this revolution are in place. Wearable sensors will make it possible to create an essentially infinite data set of the images, sounds and text that humans exchange every day. Google Photos and other cognitive computing tools are processing millions of such inputs daily. Our culture can increasingly be mapped, studied and statistically modeled. Hard rules about aesthetics are not necessary when we can just point our learning machines to the recorded history of what humans believe is beautiful and meaningful. The Golden Ratio is timeless.

What will be the meaning of such “art”? Critics of the future will wrestle with such questions.

We can also simulate evolution and reward the most creative software with fitness and something resembling life itself. In 2013, engineers at Cornell Creative Machines Lab used evolutionary programming to create simulated creatures built of 3D cubes that learned how to walk: the randomized critters that ambled fastest were allowed digital offspring, which moved faster with each generation.
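The evolutionary loop described above can be sketched in a few lines. This toy version evolves bit strings standing in for body plans, with a deliberately simple fitness function (count of 1-bits) standing in for walking speed; the Cornell work used far richer simulated physics, but the select-and-mutate cycle is the same.

```python
import random

# Toy sketch of the evolutionary loop: random "critters" (bit strings
# standing in for body plans) are scored by a fitness function (here,
# the number of 1-bits, standing in for walking speed); the fittest
# half reproduce with mutation, and fitness climbs over generations.

random.seed(7)
GENES, POP, GENERATIONS = 32, 20, 60

def fitness(critter):
    return sum(critter)                       # "speed" of this body plan

def mutate(critter, rate=0.05):
    return [g ^ (random.random() < rate) for g in critter]  # flip some bits

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
start_best = max(fitness(c) for c in population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]           # the fastest survive unchanged
    population = parents + [mutate(random.choice(parents)) for _ in parents]

end_best = max(fitness(c) for c in population)
print(start_best, end_best)                   # best fitness before and after
```

Because the top half survives unchanged (elitism), the best score can never regress; mutation supplies the variation that lets it climb.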

Our robo-artists could be motivated by a different outcome – to move the human spirit – using the vast data generated by human activity both as input and as the measure of the success and impact of their new creations. Yes, humans will set many of the creative programs into motion, but the ultimate outcome will be the product of machines. We will be the builders, accountants, and lawyers—our digital children will dance, paint and sing.

Alexey (Lex) Sokolin (@lexsokolin) is an entrepreneur building the next generation of financial services technology at Vanare (@vanareplatform). He previously founded roboadvisor NestEgg Wealth and holds a JD/MBA from Columbia University. Lex is also a digital media artist, and is fascinated by recurrent neural networks and creative AI.


  • Catherino Dolo

    It seems important to clarify that the words intelligence, creativity and cognitive are being used metaphorically in computer science.

    AI generatively recycles and recombines massive amounts of data provided by humans, and this is what gives it an appearance similar to individual and collective intelligence. In a related way, the expanding use of audio sampling/looping in music provides a digital simile of physical instruments and musicians only because its database approximates, recycles and recombines the prior recordings of human musicians.

    In either instance, these similes of human intelligence and creativity depend on massive quantities of encoded, discrete snippets of human content (and human-generated algorithmic recombination patterns) for their approximation. Thus words like intelligence and creativity are being used metaphorically when characterizing AI as anything more than an appearance or simulation of the qualities of consciousness and sentience.

    Current trends in technology and AI are replacing an increasing number of human activities. In this context, it seems important to distinguish the word ‘creativity’ in (at least) two senses: creative process (composer) versus generative recombination (compositor).

    Ultimately, it seems imperative to value and develop human creativity as a process to which new technology is designed to assist; as an area distinct from training the human creative process for the role of assisting (editing, arranging) a generative technological process.