Google builds neural network, makes it hallucinate

Using image-recognition software and a feedback loop, Google researchers have produced art both beautiful and strange.

Neural net 'dreams'— generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory.

Courtesy of Google and MIT Computer Science and AI Laboratory

June 19, 2015

What if you could see the way Google thinks? Well, thanks to images just released, it is now possible to glimpse the inner workings of the search giant's artificial neural networks. It turns out that Google's mind – if you can call it that – produces images that are by turns eerie and beautiful, and sometimes both at once.

Google researchers Alexander Mordvintsev, Christopher Olah, and Mike Tyka ran an experiment in which they used image-recognition software not to identify images it was familiar with, but to create them.

After "teaching" an artificial neural network to recognize certain objects, animals, and buildings, the researchers then threw the system a loop, literally. They gave the system an image that didn't have any of these things, and tasked it with identifying any feature that it recognized, and then to alter the image to emphasize that feature. Then they took the altered image and fed it back into the system to do it over and over.

"If a cloud looks a little bit like a bird," the researchers wrote, "the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."

The results are images distorted beyond imagination. Well, human imagination, that is: in the images, peacocks appear out of water, doglike animals float atop buildings, and buildings emerge from mountains.

Image of a man on a horse run through a filter taught to search for and emphasize animal-like shapes.
Courtesy of Google Research

The level of distortion depended on how many times the software was told to repeat the search on its own output: the greater the number of passes, the more distorted the impression. The researchers also found that the network could create an image purely out of "noise," using networks trained by the MIT Computer Science and AI Laboratory.

The researchers began this project in order to better understand what occurs at each layer of the neural network. By feeding a photo into the system and then asking the system to analyze it within the feedback loop, the team was able to invert the problem and peer into the artificial thought process. The example the research team presents on its blog is asking the neural network to show what it thinks a "banana" looks like. The team would begin with an image consisting only of "random noise," then change it in tiny increments until it became an image the network would identify as a "banana."
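A rough sketch of that noise-to-"banana" inversion, under the same caveats as above (PyTorch and torchvision's pretrained GoogLeNet as stand-ins for whatever Google actually used), might look like this: start from random pixels and repeatedly nudge them in the direction that raises the classifier's "banana" score.

import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
BANANA = 954   # "banana" in the standard ImageNet-1k label list

# Start from pure random noise and optimize the pixels themselves.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, BANANA]   # how "banana-like" the network finds the image
    (-score).backward()             # gradient ascent on the class score
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0, 1)            # keep pixel values in a plausible range

The Google researchers additionally constrained the image to share the statistics of natural images, such as correlations between neighboring pixels, which this bare-bones sketch omits.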

Anyone who has ever seen the man in the moon or a face in a piece of toast might be able to empathize with Google's network. Known as pareidolia, the tendency to perceive familiar images and sounds in random stimuli is universal among humans, and lies behind everything from emoticons to constellations.