Saving humanity from killer robots starts today, say scientists

The wonders of artificial intelligence are being celebrated at this year's World Economic Forum meeting, but potential dangers are also being explored.

Ruben Sprich/Reuters
HUBO, a multifunctional walking humanoid robot, performs a demonstration of its capabilities next to its developer, Oh Jun-Ho, a professor at the Korea Advanced Institute of Science and Technology (KAIST), during the annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, January 20.

Fully autonomous weapons, or "killer robots," have come under scrutiny at the World Economic Forum in Davos, Switzerland.

It is the first time the annual meeting has considered the subject, and it was discussed amid a general flurry of interest in the world of artificial intelligence.

While there was a focus on many of the benefits human society can enjoy as the field of robotics advances, one hour-long panel session Thursday considered the darker side: “What if robots go to war?”

The idea of rogue robots causing havoc is nothing new: science fiction has depicted such apocalyptic scenarios for decades.

But scientists, experts, and various organizations have in recent years begun to take the threat seriously.

“It’s not about destroying an industry or a whole field,” says Mary Wareham, coordinator of Campaign to Stop Killer Robots, in a phone interview with The Christian Science Monitor. “It’s about trying to ring-fence the dangerous technology."

This coalition of non-governmental organizations, launched in 2013, aims to “preemptively ban fully autonomous weapons,” defining these as “weapons systems that select targets and use force without further human intervention."

Renowned physicist Stephen Hawking was one of thousands of researchers, experts, and business leaders to sign an open letter in July 2015, which concludes:

“Starting a military AI [artificial intelligence] arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Yet there are those who see a preemptive ban as a missed opportunity. These technologies may offer the possibility of “reducing noncombatant casualties” in war, as Ronald Arkin, associate dean at the Georgia Institute of Technology in Atlanta, told the Monitor in June 2015.

He did, however, concede it made sense to have a moratorium on deploying such weapons “until we can show that we have exceeded human-level performance from an ethical perspective."

The panel in Davos included former UN disarmament chief Angela Kane and BAE Systems chair Sir Roger Carr, as well as an artificial intelligence expert and a robot ethics expert.

The chair of BAE Systems, a “global defence, aerospace and security company," described a $40 billion industry working on autonomous weapons in 40 countries.

Mr. Carr went on to say fully autonomous weapons would be “devoid of responsibility” and would have “no emotion or sense of mercy.” “If you remove ethics and judgement and morality from human endeavor, whether it is in peace or war, you will take humanity to another level which is beyond our comprehension,” he warned.

So, how close are fully autonomous weapons to becoming a reality?

Back in 2012, some predicted it would take a couple of decades, Ms. Wareham tells the Monitor in her interview, but even since then, estimates have shrunk, as last year’s open letter describes:

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."
