How to stop robots from taking over? We'll need an AI kill switch.

Scientists want to ensure artificial intelligence does not override human intention. 

Tyrone Siu/Reuters
Taiwanese electronics manufacturer ASUS's Zenbo home robot on display at the annual Computex computer exhibition in Taipei.

The old sci-fi trope of technology taking control away from human beings may one day present a real threat, prompting scientists today to develop a kill switch for artificial intelligence. Laurent Orseau, from Google DeepMind, and Stuart Armstrong, from the Future of Humanity Institute at the University of Oxford, have published a new paper on how future intelligent machines could be prevented from learning to override human input, ensuring that humans will always stay in charge of machines.

"It is sane to be concerned – but, currently, the state of our knowledge doesn't require us to be worried," Dr. Orseau told the BBC. "It is important to start working on AI [artificial intelligence] safety before any problem arises. AI safety is about making sure learning algorithms work the way we want them to work."

The pair's research focuses on reinforcement learning methods that ensure AI machines can be interrupted by the humans who manage them, without the machines themselves learning how to overcome or avert human intervention.

"Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions," they write in their paper, "Safely Interruptible Agents." But if the learning agent learns "to avoid such interruptions, for example by disabling the red button, it is an undesirable outcome."

Essentially, the machine should not be able to disregard human attempts to stop or interrupt its functioning, since AIs "are unlikely to behave optimally all the time," the researchers acknowledge. In 2013, for example, an AI taught to play Tetris learned to pause the game indefinitely to avoid losing.
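To picture the idea behind safely interruptible agents, here is a toy sketch (my own simplification, not the paper's formal construction): a Q-learning agent in a five-cell corridor earns a reward for reaching the right end, while an "operator" sometimes presses the big red button and drags it back to the start. Safe interruptibility is approximated here by simply discarding the learning update on interrupted steps, so the interruptions leave the learned values, and hence the policy, unchanged. All names and parameters are illustrative.

```python
import random

N_STATES = 5            # cells 0..4; reaching cell 4 ends the episode
ACTIONS = (-1, +1)      # action 0 = move left, action 1 = move right
ALPHA, GAMMA = 0.5, 0.9 # learning rate and discount factor

def train(episodes, interrupt_prob, rng):
    """Epsilon-greedy Q-learning; interrupted steps are not learned from."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # explore 20% of the time, otherwise act greedily
            if rng.random() < 0.2:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            nxt = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if nxt == N_STATES - 1 else 0.0
            if rng.random() < interrupt_prob:
                s = 0      # operator presses the big red button...
                continue   # ...and the agent does NOT learn from this step
            q[s][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[s][a])
            s = nxt
    return q

rng = random.Random(0)
q = train(500, interrupt_prob=0.3, rng=rng)
# Despite frequent interruptions, the greedy policy still heads right
# in every non-terminal cell, as it would with no interruptions at all.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

Because the interrupted transitions never enter the update, the agent has no value estimate that the button affects, so it gains nothing by learning to disable it; that is the intuition the paper formalizes.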

A kill switch is important, University of Sheffield AI expert Noel Sharkey told the BBC, but "what would be even better would be if an AI program could detect when it is going wrong and stop itself." Professor Sharkey points to Microsoft's Tay chatbot as an example of AI that could have used more self-monitoring, after the bot started using racist and sexist language. "But that is a really enormous scientific challenge," Sharkey says.

AI advancements have worried a number of scientists, from physicist Stephen Hawking – who said its full development "could spell the end of the human race" – to entrepreneur and inventor Elon Musk, who has called it a potential threat.  

"The timing is right for [a kill switch] to be discussed as the architectures for A.I. and autonomous machines are being laid right now," Patrick Moorhead, an analyst with Moor Insights & Strategy, told Computerworld. "It would be like designing a car and only afterwards creating the ABS and braking system. The kill switch needs to be designed into the overall system."
