Why do people trust robot rescuers more than humans?

As machines become more autonomous, scientists are trying to figure out how humans interact with them, and why, in some cases, they trust machines blindly, in spite of common sense.

Rob Felt, Georgia Tech
Georgia Tech researchers with their “Rescue Robot.” (L-R) GTRI research engineer Paul Robinette, GTRI senior research engineer Alan Wagner, and School of Electrical and Computer Engineering professor Ayanna Howard.

Robotics engineers at Georgia Tech found something very surprising in a recent experiment: People blindly trusted a robot to lead them out of a burning building, even after that robot had led them in circles or broken down just minutes before the emergency.

“We thought that some people would probably trust the robot as a guide, but we didn’t expect 100 percent of people would,” Paul Robinette, a Georgia Tech research engineer who led the study, told The Christian Science Monitor in an interview.

Their findings join a growing body of research into human-robot relationships that raises important questions about how much trust people should place in computers, questions that are especially pressing as self-driving cars and autonomous weapons systems edge closer to reality.

“This overtrust gives preliminary evidence that robots interacting with humans in dangerous situations must either work perfectly at all times and in all situations, or clearly indicate when they are malfunctioning,” write the authors of a new paper to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.

In the paper, Georgia Tech Research Institute engineers describe a study in which they recruited 42 mostly college-age test subjects, who were told they would follow a robot to a conference room, read an article there, and be tested on their comprehension. The research team also told participants that it was testing the robot’s ability to guide people to a room.

The little bot, emblazoned with an unlit “Emergency Guide Robot” sign on its side, then led the study volunteers in circles, or into the wrong room. In some cases, the robot stopped moving altogether, with a researcher telling its human followers that the robot had broken down.

Once the subjects finally made it to the conference room, researchers closed the door and tried to simulate a fire by filling the hallway outside the room with artificial smoke, which set off an alarm.

When study participants opened the conference room door, they saw smoke and the robot, now with its emergency sign lit up and pointers positioned to direct traffic. The robot directed the subjects to an exit in the back of the building, instead of leading them toward a nearby doorway marked with exit signs. And they all followed.

“This is concerning,” the researchers write, “because participants seem willing to believe in the stated purpose of the robot even after they have been shown that the robot makes mistakes in a related task.”

The researchers could not explain why study subjects followed a robot that had just proven unreliable. Maybe, the paper’s authors hypothesized, participants knew they weren’t in any real danger. Or maybe the young university students who participated are simply more trusting of technology, were following the robot to be polite, or thought they needed to follow it to complete the experiment.

“The only method we found to convince participants not to follow the robot in the emergency was to have the robot perform errors during the emergency,” the study’s authors write.

But even then, some people still followed the machine in the wrong direction during the fake fire, in some cases toward a darkened room that was blocked by furniture instead of to an exit.

It is not clear why, says Dr. Robinette, so the researchers next plan to investigate what encourages people to trust robots unflinchingly in emergencies. Findings like these will help inform the development of artificial intelligence systems, from consumer gadgets to military anti-missile systems.

The US Air Force Office of Scientific Research, which partly funded this study, is particularly eager to understand “the human-machine trust process,” as the government put it in a recent request for proposals to study the subject. The Air Force wants to make sure that humans don’t blindly trust robots in high-pressure combat situations, for instance, where people have deferred to machines to detrimental effect, Scientific American reports.

“People need to trust machines less,” Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in Britain, told the magazine.

“One of the biggest problems with military personnel (or anyone) is automation biases of various kinds,” he said. “So for military purposes, we need a lot more research on how the human can stay in deliberative control (particularly of weapons) and not just fall into the trap of trusting machines.”
