An artificial intelligence program, AlphaGo, faced off against European champion Fan Hui in the strategy game Go. The tournament lasted five games, with a final score of 5-0.
The computer won.
AlphaGo wasn't a perfect player; it did make mistakes, according to Mr. Hui. "This gives me confidence," he said in a video of the tournament. "But I lose all my games."
Defeating a human professional player was a feat long considered one of the greatest challenges of artificial intelligence. But it was how AlphaGo did it that was most astounding: through what the researchers call deep learning, the program honed its skills and taught itself new strategies.
Jonathan Schaeffer, an artificial intelligence researcher not associated with AlphaGo, calls this "a massive leap forward."
AlphaGo achieved "one of the longstanding grand challenges of AI," Demis Hassabis, CEO of Google DeepMind, the team that developed the program, said in a press teleconference.
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves," the researchers write in a paper published Wednesday in the journal Nature.
"Go is probably the most complex game ever devised by humans," Dr. Hassabis said.
Originating in ancient China, Go is a game of strategy. One player has black "stones" while the other has white. They take turns placing their stones on a 19-by-19 grid, with the goal of controlling more territory on the board than their opponent.
Stones cannot be moved once placed, but a player can capture an opponent's stones by surrounding them with their own.
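For readers who want a concrete picture, the capture rule can be sketched in a few lines of code. This is a simplified illustration, not part of AlphaGo: a stone, or a connected group of stones, is captured when it has no empty adjacent points (called "liberties").

```python
def group_and_liberties(board, start, size=19):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    color = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        row, col = point
        for neighbor in ((row - 1, col), (row + 1, col),
                         (row, col - 1), (row, col + 1)):
            if not (0 <= neighbor[0] < size and 0 <= neighbor[1] < size):
                continue
            if neighbor not in board:
                liberties.add(neighbor)       # empty point: a liberty
            elif board[neighbor] == color:
                frontier.append(neighbor)     # same color: part of the group
    return group, liberties

# A lone black stone surrounded on all four sides has no liberties:
board = {(1, 1): "B", (0, 1): "W", (2, 1): "W", (1, 0): "W", (1, 2): "W"}
group, liberties = group_and_liberties(board, (1, 1))
print(len(liberties))  # 0, so the black stone is captured
```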
In any given position, there is an average of 200 possible moves, Hassabis said, compared with an average of 20 in chess. In fact, he added, there are more possible configurations of the board than there are atoms in the universe. "It takes a lifetime of study to master," Hassabis said.
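A quick back-of-the-envelope calculation, using the rough per-position move counts Hassabis cited, shows how fast that gap compounds:

```python
def positions_after(branching_factor, depth):
    """Rough count of game-tree positions reached after `depth` moves."""
    return branching_factor ** depth

# Looking just five moves ahead with the article's figures:
chess = positions_after(20, 5)   # 3,200,000 positions
go = positions_after(200, 5)     # 320,000,000,000 positions
print(f"{go // chess:,}x more positions in Go")  # 100,000x more positions in Go
```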
"People thought it would take decades and decades before we could build Go programs that were as good as the best humans," Dr. Schaeffer tells The Christian Science Monitor in an interview.
Computer scientists have long been building artificial intelligence programs to tackle games such as checkers, chess, poker and Jeopardy!
"One by one, the games have been falling," says Schaeffer, who led the team that built Chinook, the program that defeated the world's top checkers players. "Human supremacy has been replaced by computer supremacy. But the game that everybody has known for many, many years that was the hardest nut to crack was the game of Go."
So how did the Google DeepMind researchers finally crack it?
Other AI programs have approached games with a brute-force search method. To choose a play, the computer generates every possible move and then searches through them to determine the best one. This process involves thinking ahead all the way to the end of the game, building a search tree of possibilities, to determine whether a particular move will lead to a win.
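The brute-force idea can be illustrated with a minimal minimax search over a toy game (Nim: take one or two stones from a pile; whoever takes the last stone wins). The game and names here are illustrative, chosen only because the real game trees of chess and Go are far too large to enumerate:

```python
def minimax(pile, maximizing):
    """Search every line of play to the end; +1 means the maximizer wins."""
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move whose subtree scores best for the player to move."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))

print(minimax(3, True))  # -1: a pile of 3 loses with perfect play
print(best_move(5))      # 2: taking 2 leaves the opponent the losing pile of 3
```

Even on this tiny game the search visits every leaf of the tree; with Go's branching factor, that strategy collapses.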
But in Go, the possibilities are too vast for brute-force search. So the researchers needed a new method.
To prune that vast search tree so the program doesn't have to sort through so many moves, AlphaGo combines tree search with two neural networks.
The two networks work in tandem. One, dubbed the policy network, narrows the search so the algorithm considers only the moves most likely to lead to a win. The other, the value network, evaluates how promising a board position is. Unlike a brute-force search, the value network does not play the game out to the end; it looks ahead just far enough to judge the best move.
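One way to picture the division of labor, as a rough schematic rather than DeepMind's actual code: a policy function ranks the legal moves and keeps only the best few, while a value function scores a position so the search can stop early instead of playing to the end. The `policy_net` and `value_net` below are stand-ins, not the real networks:

```python
def guided_search(position, policy_net, value_net, depth, top_k=3):
    """Look `depth` plies ahead, expanding only the top_k policy moves."""
    if depth == 0 or not position.legal_moves():
        return value_net(position)  # score the position; don't play it out
    # The policy network narrows the search to the likeliest moves.
    ranked = sorted(position.legal_moves(),
                    key=lambda move: policy_net(position, move),
                    reverse=True)
    best = float("-inf")
    for move in ranked[:top_k]:
        # Negamax convention: the opponent's best result is our worst.
        best = max(best, -guided_search(position.play(move),
                                        policy_net, value_net,
                                        depth - 1, top_k))
    return best
```

The two cutoffs, `top_k` for breadth and `depth` for lookahead, correspond to the two ways the article says the networks shrink the search.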
"This approach makes AlphaGo's search much more humanlike than previous approaches," study lead author David Silver said in the teleconference.
AlphaGo was put through a rigorous training regimen before being tested. First, it studied moves played by human experts, a phase that lasted until the program could predict a human's move 57 percent of the time. Then AlphaGo played millions of games against itself, adjusting its neural networks as it went. That self-play helped it discover new strategies strong enough to beat the human experts.
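The two training phases can be caricatured in a toy example, with the serious caveat that the real system trains deep neural networks; here the "policy" is just a table of move counts, and the game is reduced to a single guess:

```python
import random
from collections import Counter

def train_on_experts(expert_moves):
    """Phase 1: imitate humans by counting which moves experts played."""
    return Counter(expert_moves)

def self_play(policy, winning_move, games=500, rng=None):
    """Phase 2: sample moves from the policy and reinforce the winners."""
    rng = rng or random.Random(0)
    moves = list(policy)
    for _ in range(games):
        weights = [policy[move] for move in moves]
        move = rng.choices(moves, weights=weights)[0]
        if move == winning_move:   # pretend this move won the game...
            policy[move] += 1      # ...so make it more likely next time
    return policy

# Start from (hypothetical) expert play, then sharpen it through self-play:
policy = train_on_experts(["corner", "corner", "center", "edge"])
policy = self_play(policy, winning_move="center")
print(policy.most_common(1)[0][0])  # "center", discovered through self-play
```

The expert phase gives the policy a sensible starting point; self-play then shifts it toward moves that actually win, even ones the experts played less often.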
Before AlphaGo faced Hui, the European Go champion, it played, and bested, the previous top Go artificial intelligence programs.
The mechanisms behind AlphaGo could lead to other technological advances. In the immediate future, they could help improve smartphone assistants or recommendation systems, programs that learn about a user in order to provide a service. Down the line, the technology could aid in medical diagnostics.
"My dream is to use these types of general learning systems to help with science," Hassabis said. In his futuristic vision, artificial intelligence systems would work alongside human scientists to help expand what is possible in scientific endeavors.
The prospect of artificial intelligence has raised ethical concerns, with some worried about robots turning violent. Google has even put together an ethics board to make sure its artificial intelligence projects won't threaten humanity.
Physicist Stephen Hawking and Tesla Motors CEO Elon Musk have been among those calling for oversight of artificial intelligence research. Musk, however, has invested in Google DeepMind to help ensure that advances in artificial intelligence are well thought out and serve to advance humanity rather than endanger it.
At this point, it may be all about how the computer systems are handled, Schaeffer says.
"Computers are tools. We use them to improve the quality of our lives. The more intelligent we make these computers, the better opportunities there are for building tools to improve the quality of life."
"In a sense intelligence is the wrong word," he says. "Intelligence is a human attribute, but these are just computers. They're doing exactly what they've been programmed to do."
And AlphaGo has been programmed to win at Go.