How to stop robots from taking over? We'll need an AI kill switch.

Scientists want to ensure artificial intelligence does not override human intention. 

Taiwanese electronics manufacturer ASUS's Zenbo home robot on display at the annual Computex computer exhibition in Taipei.

Tyrone Siu/Reuters

June 8, 2016

The old sci-fi trope of technology wresting control from human beings may one day present a real threat, prompting scientists to develop a kill switch for artificial intelligence. Laurent Orseau, from Google DeepMind, and Stuart Armstrong, from the Future of Humanity Institute at the University of Oxford, have published a new paper on how future intelligent machines could be prevented from learning to override human input, ensuring that humans always stay in charge of machines.

"It is sane to be concerned – but, currently, the state of our knowledge doesn't require us to be worried" Dr. Orseau told the BBC. "It is important to start working on AI [artificial intelligence] safety before any problem arises. AI safety is about making sure learning algorithms work the way we want them to work."

The pair's research focuses on reinforcement learning methods that ensure AI agents can be interrupted by the humans who manage them, without the agents themselves learning how to overcome or avert human intervention.


"Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions," they write in their paper, "Safely Interruptible Agents." But if the learning agent learns "to avoid such interruptions, for example by disabling the red button, it is an undesirable outcome."

Essentially, the machine should not be able to disregard human attempts to stop or interrupt its functioning, since AIs "are unlikely to behave optimally all the time," the researchers acknowledge. In 2013, for example, an AI taught to play Tetris learned to pause the game indefinitely to avoid losing.
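To see how an interruption could accidentally "teach" an agent to dodge the big red button, consider a toy sketch. This is an invented illustration, not the paper's formal construction (the authors analyze safe interruptibility rigorously and note that certain off-policy learners behave well under interruption); every state, parameter, and name below is hypothetical. The idea: a Q-learning agent on a short track is halted by a human operator whenever it enters a particular state, and the halt is kept out of the learning update, so the interruption cannot bias the agent toward avoiding it.

```python
import random

# Hypothetical illustration (not Orseau and Armstrong's formal method):
# a Q-learning agent on a 1-D track of states 0..4, rewarded for
# reaching state 4. A human operator "presses the big red button"
# half the time the agent enters state 2, halting the episode.
# Interrupted steps get NO learning update, so the interruption
# cannot teach the agent that entering state 2 is "bad".

random.seed(0)
N_STATES = 5                  # states 0..4; reward at state 4
ACTIONS = [-1, +1]            # step left or right
INTERRUPT_AT = 2              # the operator's button covers this state
ALPHA, GAMMA = 0.5, 0.9       # learning rate, discount

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(5000):
    s = 0
    for _ in range(30):
        a = random.choice(ACTIONS)              # uniform exploration (off-policy)
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
        if s2 == INTERRUPT_AT and random.random() < 0.5:
            break   # interrupted: halt with no Q update at all
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        if r > 0:
            break   # reached the goal
        s = s2

# Despite being halted half the time it approaches state 2, the greedy
# policy still heads right, straight through the interruptible state.
# (Updating on interrupted steps as if they were zero-reward endings
# would instead drag Q(1, +1) down and teach the agent to steer clear.)
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The design choice doing the work is the bare `break` on interruption: because the halted transition never reaches the Q update, the learned values converge as if the button did not exist, which is the intuition behind "safely interruptible" behavior.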

A kill switch is important, University of Sheffield AI expert Noel Sharkey told the BBC, but "what would be even better would be if an AI program could detect when it is going wrong and stop itself." Professor Sharkey points to Microsoft's Tay chatbot as an example of AI that could have used more self-monitoring, after the bot started using racist and sexist language. "But that is a really enormous scientific challenge," Sharkey says.

AI advancements have worried a number of scientists, from physicist Stephen Hawking – who said its full development "could spell the end of the human race" – to entrepreneur and inventor Elon Musk, who has called it a potential threat.  

"The timing is right for [a kill switch] to be discussed as the architectures for A.I. and autonomous machines are being laid right now," Patrick Moorhead, an analyst with Moor Insights & Strategy, told Computerworld. "It would be like designing a car and only afterwards creating the ABS and braking system. The kill switch needs to be designed into the overall system."