
Artificial intelligence can learn Atari games from scratch, say scientists (+video)

Developers at DeepMind took cues from the human brain, creating a program that can master ‘Space Invaders’ all on its own.

[Photo: A child plays Ms. Pac-Man in New York. Computers have already bested human champions in Jeopardy! and chess, but artificial intelligence has now gone on to master an entirely new level: Space Invaders. AP Photo/Richard Drew]

Is a robot uprising coming in 2015?

Maybe – but only to show you up at the arcade.

Led by researchers Demis Hassabis and Volodymyr Mnih, Google-owned DeepMind Technologies has created an artificial intelligence capable of playing simple video games with minimal training. They described their breakthrough today in Nature.

Dubbed the deep Q-network agent (or DQN), DeepMind’s program can play a number of popular Atari 2600 titles, including ‘Pong,’ ‘Space Invaders,’ and ‘Breakout.’ According to the study, it is “the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.”

Video game-playing AI already exists, as any lonely gamer can tell you. In the absence of a real human opponent, most games allow players to challenge the “computer.” But in those games, the AI is endowed with a series of specific rules that guide its behavior. DQN, on the other hand, is given only one objective – maximize the score. From there, it “watches” the gameplay to learn new strategies in real time. Like the human brain, it learns from experience.
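This learn-by-trial-and-error approach is a form of reinforcement learning. What follows is a minimal illustrative sketch of the tabular variant, Q-learning, on a made-up five-cell "corridor" world where reaching the last cell scores a point. It is not DeepMind's actual system, which learns from raw screen pixels with a deep neural network, but it shows the same core idea: the agent starts knowing nothing, tries actions, and gradually learns which moves maximize the score.

```python
import random

# Illustrative sketch only: tabular Q-learning on a toy 5-cell corridor,
# NOT DeepMind's DQN (which pairs this idea with a deep network over pixels).
# The agent starts at cell 0; reaching cell 4 yields a reward of 1.

N_STATES = 5
ACTIONS = [-1, +1]               # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's running estimate of each (state, action) value.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: reward 1 only on reaching the final cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best follow-up action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: the best action from each non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After a couple hundred episodes the agent settles on moving right from every cell, purely from the reward signal; DQN applies the same update rule, but with a neural network standing in for the Q-table so it can cope with the vastly larger "state" of an Atari screen.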

“It looks trivial in the sense that these are games from the 80s and you can write solutions to these games quite easily,” Dr. Hassabis, who co-founded DeepMind, told the BBC. “What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do. The same system can play 49 different games from the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of gameplay what to do.”

Perhaps more impressively, DQN can take these strategies and apply them to games it hasn’t played before. In other words, when DQN gets better at one video game, it’s actually getting better at a whole host of games.

The program is far from perfect, however. While it rivals human players in action-oriented games, it struggles with more open-ended titles.

“Games where the system doesn't do well are ones that require long-term planning,” Dr. Mnih told NBC. “For instance, in Ms. Pac-Man, if you have to get to the other side of the maze you have to perform quite sophisticated pathfinding and avoid ghosts to get there.”

As DeepMind prepares DQN for ever more complex gameplay, even greater potential awaits on the horizon. Even more so than chess, video games can provide a model of the real world – one that requires intricate, adaptive decision-making. The researchers have remained tight-lipped about exactly which real-world applications they have planned, but slyly note that their program could someday drive a real car with “a few tweaks.” Does that mean DQN could go from Mario Kart champ to digital chauffeur? Only time will tell.
