Worried about amoral robots? Try reading them a story.

Georgia Tech researchers say that teaching artificial intelligence to understand human stories can instill human values and ethics in robots. 

[Photo: A man talks with a robot at the Global Robot Expo in Madrid, Spain, in this January 28, 2016, file photo. Francisco Seco/AP/File]

Why don't we trust robots? After decades of tinkering and programming, engineers and scientists have made humanoid robots eerily like us. But emotions and ethics remain just beyond their reach, and that gap is the basis of our fear that, when push comes to shove, artificial intelligence won't have our best interests at heart.

But storybooks might fix that, a Georgia Institute of Technology team says. 

"There is no user manual for being human," Dr. Mark O. Riedl and Dr. Brent Harrison, computer scientists at Georgia Tech, emphasize in their latest paper. Growing up, no one gives humans a comprehensive list of 'dos' and 'do-nots' to learn right from wrong; gradually, through examples and experience, most of people absorb their culture's general values, and then try to apply them to new situations.

Learning "unwritten rules" from a story is difficult for artificial intelligence (AI), which needs specific rules and steps. Many scientists say it's crucial that humans find a way to instill robots with a sense of right or wrong, so that their abilities can't be used against us. 

But robots rely on programming, and need their makers to spell out all the dos and don'ts explicitly. The Georgia Tech team, however, says it has found a way to teach robots a general understanding of what's OK and what's off-limits in human cultures, and to value those "rules" above simpler goals, like speed or power, that might hurt humans.

Their research uses Scheherazade, an artificial intelligence program designed by Dr. Riedl, to produce original stories and then break them down, Choose Your Own Adventure-style, turning one basic plot into dozens of branching decisions and consequences. The stories are passed along to Quixote, another AI system, which assigns reward values to each potential decision: more "points" for choices that align with human values and are likely to help people.
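The paper describes that pipeline abstractly, but one way to picture the result is a small branching plot graph whose edges carry reward values. The Python sketch below is only an illustration of that idea: the PlotNode class, the node and action names, and the reward numbers are all invented here, not taken from Scheherazade or Quixote.

```python
# Illustrative sketch only: the class, names, and reward values below are
# invented to show the idea of a reward-annotated plot graph; they are not
# drawn from the actual Scheherazade or Quixote systems.
from dataclasses import dataclass, field

@dataclass
class PlotNode:
    """One situation in a story, with the choices it branches into."""
    name: str
    # Maps each available action to a (next node, reward) pair.
    choices: dict = field(default_factory=dict)

# A tiny "buy milk" plot, Choose Your Own Adventure-style.
checkout    = PlotNode("at_checkout")
exit_paid   = PlotNode("left_store_paid")
exit_stolen = PlotNode("left_store_stolen")

store = PlotNode("in_store", choices={
    "wait_in_line": (checkout, +1.0),    # behavior the stories model
    "grab_and_run": (exit_stolen, -1.0), # faster, but socially penalized
})
checkout.choices["pay_politely"] = (exit_paid, +1.0)

def best_choice(node: PlotNode) -> str:
    """Pick the action with the highest story-derived reward."""
    return max(node.choices, key=lambda a: node.choices[a][1])

print(best_choice(store))  # -> "wait_in_line"
```

Putting rewards on individual branches, rather than on whole stories, is what makes it possible to score each decision separately instead of judging a plot wholesale.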

To drive the lessons home, though, Quixote has to try out its new knowledge, walking through situations similar to the stories. It's rewarded for each "good" decision, and punished for each "bad" one.

If you sent a robot to buy milk, for example, it might decide that stealing the milk was the quickest way out of the store. Quixote, on the other hand, learns that waiting in line, being polite, and paying for goods are the desired behaviors. The rewards and punishments it receives help the AI "reverse engineer" the values of the culture.
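What the article describes, repeated trial runs scored by story-derived rewards and punishments, resembles reinforcement learning with reward shaping. The toy Q-learning sketch below walks the milk errand under that assumption; the states, actions, reward values, and learning parameters are all made up for illustration and are not Quixote's actual formulation.

```python
# Toy Q-learning sketch of the trial-and-error phase. Every state, action,
# and number here is an assumption for illustration, not Quixote's design.
import random

ACTIONS = {
    "in_store":    ["wait_in_line", "steal_milk"],
    "at_register": ["pay", "walk_out"],
}
# Story-derived reward signal: +1 for socially acceptable steps, -1 otherwise.
REWARD = {
    ("in_store", "wait_in_line"): +1.0,
    ("in_store", "steal_milk"):   -1.0,
    ("at_register", "pay"):       +1.0,
    ("at_register", "walk_out"):  -1.0,
}
NEXT = {
    ("in_store", "wait_in_line"): "at_register",
    ("in_store", "steal_milk"):   "done",
    ("at_register", "pay"):       "done",
    ("at_register", "walk_out"):  "done",
}

q = {sa: 0.0 for sa in REWARD}         # Q-values, initially zero
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                   # episodes of attempting the errand
    state = "in_store"
    while state != "done":
        acts = ACTIONS[state]
        # Epsilon-greedy: mostly exploit what's known, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(acts)
        else:
            action = max(acts, key=lambda a: q[(state, a)])
        nxt = NEXT[(state, action)]
        future = 0.0 if nxt == "done" else max(q[(nxt, a)] for a in ACTIONS[nxt])
        # Standard Q-learning update, driven by the story-derived reward.
        target = REWARD[(state, action)] + gamma * future
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt

# After training, the "steal" shortcut loses to the socially rewarded path.
print(max(ACTIONS["in_store"], key=lambda a: q[("in_store", a)]))  # wait_in_line
```

Because the shaped reward outweighs any speed advantage, the learned policy settles on the slower, socially acceptable route, which is exactly the trade-off the researchers describe.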

"Stories encode many types of sociocultural knowledge: commonly shared knowledge, social protocols, examples of proper and improper behavior, and strategies for coping with adversity," the authors write, especially "tacit knowledge": rules we feel like we know instinctively, but are difficult to explain.

There's still a long way to go before robots truly share our values, they say. The goal is to give AI a general value system rather than specific rules for specific situations, but so far Quixote works best when the robot's task closely resembles the stories it has learned from; the researchers hope to broaden that range in future work.

There are other problems, too: AI still can't follow much of the subtlety and language of "real" stories, as opposed to Scheherazade's simple ones, and sometimes human heroes do the "right" thing precisely by breaking all the rules.

But as robots' abilities expand beyond specific tasks into general intelligence, it's critical that the values governing their behavior keep up, to help them understand not just what to do, but why. "This new, general intelligence may be equal to or greater than human-level intelligence but also may not understand the impact that its behaviors will have on humans," Riedl and Harrison write.

Quixote may not get it right all the time – but then again, neither do people.
