
New role for robot warriors

Drones are just part of a bid to automate combat. Can virtual ethics make machines decisionmakers?

By Staff writer / February 17, 2010

Airmen roll out a Predator unmanned aircraft in Indian Springs, Nev. Such aircraft are tightly controlled by remote human operators. Some artificial-intelligence proponents believe next-generation robots could function more autonomously.

Tony Avelar/The Christian Science Monitor/File


Science fiction sometimes depicts robot soldiers as killing machines without conscience or remorse. But at least one robotics expert today says that someday machines may make the best and most humane decisions on the battlefield.


Guided by virtual emotions, robots could not only make better decisions about their own actions but also act as ethical advisers to human soldiers, or even as observers who report on whether soldiers' battlefield conduct complied with international law.

As militaries around the world invest billions in robotic weapons, no fundamental barriers lie ahead to building machines that "can outperform human soldiers in the battlefield from an ethical perspective," says Ronald Arkin, associate dean at the School of Interactive Computing at Georgia Institute of Technology in Atlanta. The result would be a reduction in casualties both for soldiers and civilians, he says.

Dr. Arkin has begun work on an ethical system for robots based on the concept of "guilt." As a robot makes decisions, such as whether to fire its weapons and what type of weapon to use, it would constantly assess the results and learn. If the robot established that its weapons caused unnecessary damage or casualties, it would scale back its use of weapons in a future encounter. If the robot repeatedly used excessive force, it would shut down its weapons altogether – though it could continue to perform its other duties such as reconnaissance.

"That's what guilt does in people, too, at least in principle," Arkin says. "Guilty people change their behavior in response to their actions."
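The guilt mechanism Arkin describes can be pictured as a simple feedback loop: each engagement is assessed, excess damage accumulates as "guilt," accumulated guilt scales back the force the robot is permitted to use, and past a threshold the weapons shut down while other functions continue. The sketch below is purely illustrative — the class, its names, and its numeric thresholds are hypothetical, not Arkin's actual architecture.

```python
class GuiltModerator:
    """Illustrative sketch of a guilt-based restraint loop (hypothetical,
    not Arkin's actual system): excess damage accumulates as guilt, guilt
    scales back permitted force, and past a threshold weapons shut down."""

    def __init__(self, shutdown_threshold=3.0):
        self.guilt = 0.0                  # accumulated "guilt" score
        self.shutdown_threshold = shutdown_threshold
        self.weapons_enabled = True       # other duties (e.g., reconnaissance) continue regardless

    def assess_engagement(self, expected_damage, actual_damage):
        """After each engagement, compare the outcome with what was expected."""
        excess = max(0.0, actual_damage - expected_damage)
        self.guilt += excess
        if self.guilt >= self.shutdown_threshold:
            # Repeated excessive force: disable weapons entirely.
            self.weapons_enabled = False

    def max_force_allowed(self, baseline_force):
        """Permitted force shrinks as guilt accumulates; zero once shut down."""
        if not self.weapons_enabled:
            return 0.0
        return baseline_force / (1.0 + self.guilt)


moderator = GuiltModerator()
moderator.assess_engagement(expected_damage=1.0, actual_damage=2.5)
print(moderator.max_force_allowed(10.0))   # force scaled back from baseline 10.0
moderator.assess_engagement(expected_damage=0.5, actual_damage=2.5)
print(moderator.weapons_enabled)           # threshold crossed: weapons disabled
```

As with guilt in people, the adjustment here is one-directional and cumulative: the moderator never "forgets," so repeated excess monotonically ratchets restraint upward until the weapons lock out.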

Though "Terminator"-style warriors will likely remain fictional long into the future, thousands of military robots are already operating on land, at sea, and in the air, many of them capable of firing lethal weapons. They include the missile-firing Predator and Reaper aircraft, remotely controlled by human operators, that the American military uses in Iraq and Afghanistan. Naval ships from several nations employ Phalanx gun systems (sometimes called "R2D2s" on American ships, after the robot from "Star Wars"), capable of shooting down incoming planes or missiles without command or targeting from a human.

South Korea has deployed armed robotic systems along its demilitarized zone with North Korea. The Israeli army patrols its borders with Gaza and Lebanon with roving unmanned ground vehicles.

Among systems being developed for the future by the US military are the Vulture, a pilotless helicopter that could stay aloft for up to 20 hours, and an unmanned ground combat vehicle.

But Arkin's sunny forecast for the future of ethical robot warriors has met with deep skepticism among some in his field.