Saving humanity from killer robots starts today, say scientists

The wonders of artificial intelligence are being celebrated at this year's World Economic Forum meeting, but potential dangers are also being explored.

HUBO, a multifunctional walking humanoid robot, demonstrates its capabilities next to its developer, Oh Jun-Ho, a professor at the Korea Advanced Institute of Science and Technology (KAIST), during the annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, January 20.

Ruben Sprich/Reuters

January 22, 2016

Fully autonomous weapons, or "killer robots," have come under scrutiny at the World Economic Forum in Davos, Switzerland.

It is the first time the annual meeting has considered the subject, and it was discussed amid a general flurry of interest in the world of artificial intelligence.

While there was a focus on many of the benefits human society can enjoy as the field of robotics advances, one hour-long panel session Thursday considered the darker side: “What if robots go to war?”

The idea of rogue robots causing havoc is nothing new: science fiction has depicted such apocalyptic scenarios for decades.

But scientists, experts, and various organizations have in recent years begun to take the threat seriously.

“It’s not about destroying an industry or a whole field,” says Mary Wareham, coordinator of the Campaign to Stop Killer Robots, in a phone interview with The Christian Science Monitor. “It’s about trying to ring-fence the dangerous technology.”

This coalition of non-governmental organizations, launched in 2013, aims to “preemptively ban fully autonomous weapons,” defining these as “weapons systems that select targets and use force without further human intervention."

Renowned physicist Stephen Hawking was one of thousands of researchers, experts, and business leaders to sign an open letter in July 2015, which concludes:

“Starting a military AI [artificial intelligence] arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Yet there are those who see a preemptive ban as a missed opportunity. These technologies may offer the possibility of “reducing noncombatant casualties” in war, as Ronald Arkin, associate dean at the Georgia Institute of Technology in Atlanta, told the Monitor in June 2015.

He did, however, concede that it made sense to have a moratorium on deploying such weapons “until we can show that we have exceeded human-level performance from an ethical perspective.”

The panel in Davos included former UN disarmament chief Angela Kane and BAE Systems chair Sir Roger Carr, as well as an artificial intelligence expert and a robot ethics expert.

Mr. Carr, whose company bills itself as a “global defence, aerospace and security company,” described a $40 billion industry working on autonomous weapons in 40 countries.

He went on to say fully autonomous weapons would be “devoid of responsibility” and would have “no emotion or sense of mercy.” “If you remove ethics and judgement and morality from human endeavor, whether it is in peace or war, you will take humanity to another level which is beyond our comprehension,” he warned.

So, how close are fully autonomous weapons to becoming a reality?

Back in 2012, some predicted fully autonomous weapons were a couple of decades away, Ms. Wareham tells the Monitor, but estimates have since shrunk, as last year’s open letter describes:

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."