In a future full of robots, where do humans fit in?

As robots appear more in daily life, what jobs should be performed by humans and who should be responsible when robots go awry?

AP Photo/Alastair Grant
A technician holds the hand of Rob's Open Source Android (ROSA), built in France from 2010 to 2016, during a press preview for the Robot exhibition held at the Science Museum in London, Tuesday, Feb. 7, 2017. The exhibition, which shows 500 years of mechanical and robotic advances, is open to the public from Feb. 8 through Sept. 3.

Someday soon, you will ask a robot to fetch a slice of pizza from your refrigerator. On that day, you’ll trust that the robot won’t tear through your walls and rip the fridge door off its hinges to get at your leftovers.

Getting robots to do the things humans do, in the ways humans do them (or better) and without human intervention, is a wickedly difficult problem of autonomy. With as many as half of American jobs at risk of automation, according to one study, and an expected 10 million self-driving cars on the road by 2020, robots are going to be everywhere, and they aren't going away.

The enormous scope and scale of the changes autonomous robots will bring to our lives require the public and technologists alike to consider the challenges of autonomy.

Where will we allow robots to intervene in our lives? How do we make ethical judgments about the behavior of robots? What kind of partnerships will we develop with them?

These are big questions. And one key challenge at the core of many of them is, in roboticist-talk, what it means to establish “meaningful human control,” or sufficient oversight over an autonomous agent. 

To get a grip on our autonomous future, we’ll need to figure out what constitutes “enough” oversight of a machine imbued with incredible intelligence.

Today, most robots are made to accomplish a very specific set of tasks within a very specific set of parameters, such as geographic or time limitations, that are tied to the circuits of the machine itself.

“We’re not at the stage where robots can do everything that humans can do,” says Dr. Spring Berman, assistant professor of mechanical and aerospace engineering at Arizona State University. “They could be multi-functional but they’re limited by their hardware.”

Thus, they need a human hand to help direct them toward a specific goal, in a futuristic version of the ancient partnership between dogs and humans, says Dr. Nancy Cooke, a professor of human systems engineering at ASU who studies human-machine teaming.

Before dogs can lead search-and-rescue teams to buried skiers or sniff out bombs, they require an immense amount of training and “on-leash” time, Cooke says. Robots need the same level of training, though that training is usually programmed and based on multiple tests rather than on the robot actually “learning.”

Even after rigorous “training” and vetting against a variety of distractions and difficulties, sometimes robots still do things they aren’t supposed to do because of quirks buried in their programming. In those cases, someone needs to be held accountable if the robot goes outside of its boundaries.

“It can’t be some patsy sitting in a cubicle somewhere pushing a button,” says Dr. Heather Roff, a research scientist at ASU’s Global Security Initiative and senior research fellow at Oxford University. “That’s not meaningful.”

Based on her work with autonomous weapons systems, Dr. Roff says she is also wary of the sentiment that there will always be a human around.

“A machine is not a morally responsible agent,” she says. “A human has to have a pretty good idea of what he’s asking the system to do, and the human has to be accountable.”

The allure of technology that can resolve problems difficult for humans, like identifying enemy combatants, is immense.

Yet technological solutions require us to reflect deeply on the system being deployed: How is the combatant being identified? By skin tone, or gender or age or the presence or absence of certain clothing? What happens when a domestic police force deploys a robot equipped with this software? Ultimately, whose finger is on the trigger?

Many of the ethics questions in robotics boil down to how the technology could be used by someone else in the future, and how much decision-making power you give to a robot, says Berman.

“I think it’s really important that a moral agent is the solely responsible person [for a robot],” says Roff. “Humans justify bad actions all the time even without robots. We can’t create a situation where someone can shirk their moral responsibilities.”

And we can’t allow robots to make decisions without asking why we want robots to make those decisions in the first place. Answering those questions allows us to understand and implement meaningful human control.

Complexity

Complexity, a partnership between The Christian Science Monitor and the Santa Fe Institute, seeks to illuminate the rules governing dynamic systems, from electrons to ecosystems to economies and beyond. An intensely multidisciplinary approach, complexity science seeks out the common processes that pervade seemingly disparate phenomena, always with an eye toward solving humanity's most intractable problems.

This initiative is generously supported by

  • Arizona State University