Any novel idea can become an invention, but for a computer to be truly creative, it has to innovate.
“Creativity consists of innovations,” says David Galenson, a University of Chicago economist who studies art markets and human creativity. “It changes the way people do things. The question is, will machines be capable of doing new things that are actually used?”
Professor Galenson says that creativity comes in two types: conceptual, which tends to be spontaneous, and experimental, which comes from years of practice. Orson Welles was only 25 when he directed his first feature film – “Citizen Kane,” now widely considered the best film ever made – but Alfred Hitchcock directed dozens of films before finally achieving his magnum opus, “Vertigo,” at age 58.
Most AIs take the Hitchcockian approach to creative work. Because they can process information much faster than humans can, they can experiment with new combinations of data in a fraction of the time. This brute-force approach to creativity has already produced surprisingly human results.
Jack Hopkins, a former University of Cambridge researcher, has developed software that can be tuned to compose poetry in a specific rhythm on a specific theme. The system was trained on more than 7 million words of 20th-century English poetry, and some of its efforts have passed a Turing test – fooling readers into thinking they’re reading the words of a human.
His poetry bot is by no means the first to do that. In 2015, a computer-generated poem was accepted for publication in The Archive, a student-run literary journal at Duke University and one of the oldest literary magazines in the United States.
Meanwhile, website-building tools such as Firedrop and The Grid have employed AI assistants to simplify or even automate web design. In August, a program named Amper released “I AM AI,” the first music album composed and produced entirely by an artificial intelligence.
But conceptual innovation presents a deeper challenge.
Spontaneous bursts of creativity arise from the heuristic and sometimes nonsensical logic of human thought. AIs are fundamentally data-driven. As a result, many of these programs provide “passable” solutions derived from common patterns, rather than entirely new creative works. Indeed, early adopters of AI-assisted design tools have complained of repetitive, “template-like” results. Amper’s debut single, “Break Free,” is well-composed but ultimately forgettable.
While people can usually appreciate “goodness” on an intuitive level, a computer needs parameters to reach a conclusion. This poses a significant conceptual challenge to AI researchers: How does one articulate a nebulous concept like “good” to a machine?
“Machine learning is good at generating and evaluating variations,” says Ranjitha Kumar, a computer science professor at the University of Illinois at Urbana-Champaign. “[But] you don’t really understand the problem definition, the constraints, or the criteria for goodness until you’ve built a bunch of things and tried them out. It’s hard to imagine an AI doing all that on its own anytime soon.”
But that future might not be so far off. In 2009, Canadian scientists developed a portrait-painting algorithm with an “automatic fitness function” to produce human-like artistic choices. Without prompting, the AI “rediscovered” certain techniques used by famous artists, such as using brush strokes to lead the viewer’s eye toward the eyes of the portrait’s subject.
“This is something that Rembrandt did but this was not ‘hand-coded’ into the computer program,” says co-author Liane Gabora, an assistant professor of psychology at the University of British Columbia-Okanagan. “It figured this out for itself.”