Dado Ruvic/Reuters/File
People are silhouetted as they pose with laptops in front of a screen projected with a Google logo, in this picture illustration taken Oct. 29, 2014, in Zenica, Bosnia and Herzegovina.

Google researchers build networks that invent their own encryption

Neural networks nicknamed 'Alice' and 'Bob' were taught to keep secrets from an adversarial network nicknamed 'Eve,' evolving their methods until their private messages remained secret.

Encryption software written by human programmers already protects sensitive data as it changes hands across a network, ensuring that only the intended recipient of any message can unlock it. But what if a network could write its own encryption software, inventing a security system to which no humans have a key?

Researchers with Google Brain, a "deep learning" initiative within the company best known for its search engine, published a paper last week documenting their ability to do just that.

By teaching two neural networks, nicknamed "Alice" and "Bob," to communicate with each other while keeping the contents of their messages secret from an adversarial third network, "Eve," the researchers effectively demonstrated that artificial intelligence (AI) systems can act as tireless tacticians in the never-ending struggle for data security. The approach, although still in its early stages, could revolutionize a broad array of scientific problem-solving.

"Computing with neural nets on this scale has only become possible in the last few years, so we really are at the beginning of what's possible," Joe Sturonas of encryption company PKWARE in Milwaukee, Wis., told New Scientist.

Google chairman Eric Schmidt said in 2014 that AI research has been building steadily since its conception in 1955, and he predicted last year that AI will take off in the near future, paving the way to breakthroughs in genomics, energy, climate science, and other areas, as The Christian Science Monitor reported.

In the meantime, researchers are playing games. More specifically, they are building machines that learn by playing games.

Earlier this year, a computer running a program developed by Google outmaneuvered top-ranked human player Lee Se-dol in the ancient board game Go. The triumph marked a significant advance beyond computerized chess, as the Monitor's correspondent Jeff Ward-Bailey reported in March:

When IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997, it did so more or less through brute force. The computer could evaluate 200 million chess positions per second, mapping out the most likely path to checkmate by peering many moves into the future. Human players simply can’t compute chess positions that quickly or thoroughly. But a chessboard is eight squares by eight squares while a Go board is 19 squares by 19 squares, which means it’s simply not feasible for a computer to evaluate all possible moves the way it would in a game of chess or checkers. Instead, it must use intuition to learn from past matches and predict optimal moves. 

Many researchers thought that artificial intelligence wouldn’t be able to develop those kinds of strategies until some time in the 2020’s. But AlphaGo relies on machine learning and Google’s "neural network" computers to be able to analyze millions of games of Go, including many it has played against itself.
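The scale gap the quoted passage describes can be made concrete with a little arithmetic. The figure of 3 states per Go intersection (empty, black, or white) and the resulting loose upper bound are assumptions of this sketch, not numbers from the article:

```python
# Rough comparison of the chess and Go board sizes mentioned above.
chess_squares = 8 * 8   # 64 squares
go_points = 19 * 19     # 361 intersections

# Each Go intersection is empty, black, or white, so 3**361 is a
# (loose) upper bound on board configurations -- most are illegal,
# but the bound conveys the scale.
go_upper_bound = 3 ** go_points

print(chess_squares, go_points)   # 64 361
print(len(str(go_upper_bound)))   # 173 -- a number with 173 digits
```

A search space on the order of 10^172 positions is why exhaustive lookahead of the Deep Blue variety is not an option for Go.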

Instead of relying on rules provided by human developers, neural networks sift through large amounts of data looking for patterns and relationships to inform future computations. In the most recent case, Google researchers had Alice encrypt a 16-bit message, a string of ones and zeroes, and send it to Bob. The two networks began with a shared secret key, but the encryption method itself was not handed to them; it evolved over the course of training.

Eve effectively decrypted the first 7,000 messages, but she quickly faltered thereafter, thwarted by the constantly changing tactics employed by the other two.
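The roles the three networks play can be illustrated with a toy stand-in. The sketch below substitutes a fixed XOR (one-time-pad) cipher for the transformation the networks actually learned; that substitution is purely illustrative, since the real Alice and Bob invented their own scheme, one that even the researchers could not read:

```python
import random

def alice(plaintext, key):
    # Toy stand-in for Alice's learned encryption: bitwise XOR with the key.
    return [p ^ k for p, k in zip(plaintext, key)]

def bob(ciphertext, key):
    # Bob holds the same key, so XOR-ing again recovers the plaintext.
    return [c ^ k for c, k in zip(ciphertext, key)]

def eve(ciphertext):
    # Eve sees only the ciphertext; without the key, her best toy
    # strategy here is to guess each bit.
    return [random.randint(0, 1) for _ in ciphertext]

random.seed(0)
key = [random.randint(0, 1) for _ in range(16)]        # shared secret
plaintext = [random.randint(0, 1) for _ in range(16)]  # 16-bit message

ciphertext = alice(plaintext, key)
assert bob(ciphertext, key) == plaintext  # Bob decrypts exactly

# Training pushes Bob's bit-error rate toward 0 and Eve's toward
# 0.5 (chance); here Eve's errors come from pure guessing.
eve_errors = sum(g != p for g, p in zip(eve(ciphertext), plaintext))
print(eve_errors, "of 16 bits wrong")
```

In the actual experiment all three parties were neural networks trained jointly: Alice and Bob adjusted their weights to minimize Bob's reconstruction error while maximizing Eve's, and Eve trained in parallel to minimize her own, producing the arms race described above.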

"We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals," researchers Martín Abadi and David G. Andersen wrote in their paper published last week.

"While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis," the researchers added.

John Biggs, writing for TechCrunch, noted that the researchers demonstrated how computers might be able to keep secrets not only from each other but from humans as well.

"This means robots will be able to talk to each other in ways that we – or other robots – won’t be able to crack. I, for one, welcome our robotic cryptographic overlords," he quipped.

But that very secrecy may highlight a limitation of the study for real-world uses, others noted.

"Because of the way the machine learning works, even the researchers don't know what kind of encryption method Alice devised, so it won't be very useful in any practical applications," Andrew Dalton wrote for Engadget. "In the end, it's an interesting exercise, but we don't have to worry about the machines talking behind our backs just yet."
