
Worried about amoral robots? Try reading them a story.

Georgia Tech researchers say that teaching artificial intelligence to understand human stories can instill human values and ethics in robots. 

[Photo: A man talks with a robot at the Global Robot Expo in Madrid, Spain, in this January 28, 2016 file photo. Francisco Seco/AP/File]

Why don't we trust robots? After decades of tinkering, engineers and scientists have programmed humanoid robots to be eerily like us. But emotions and ethics remain just beyond their reach, feeding the fear that, when push comes to shove, artificial intelligence won't have our best interests at heart.

But storybooks might fix that, a Georgia Institute of Technology team says. 

"There is no user manual for being human," Dr. Mark O. Riedl and Dr. Brent Harrison, computer scientists at Georgia Tech, emphasize in their latest paper. Growing up, no one gives humans a comprehensive list of 'dos' and 'do-nots' to learn right from wrong; gradually, through examples and experience, most of people absorb their culture's general values, and then try to apply them to new situations.


Learning "unwritten rules" from a story is difficult for artificial intelligence (AI), which needs specific rules and steps. Many scientists say it's crucial that humans find a way to instill robots with a sense of right or wrong, so that their abilities can't be used against us. 

But robots rely on programming, and need their makers to spell out every do and don't. The Georgia Tech team, however, says it has found a way to teach robots a general understanding of what's OK and what's off-limits in human cultures, and to value those "rules" more than simpler goals, like speed or power, that might hurt humans.

Their research uses Scheherazade, an artificial intelligence program designed by Dr. Riedl, to produce original stories and then break them down, Choose Your Own Adventure-style, turning one basic plot into dozens of branching decisions and consequences. The stories are passed along to Quixote, another AI system, which assigns reward values to each potential decision: more "points" for choices that align with human values and are likely to help people.
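To get a feel for the setup, here is a minimal Python sketch of that branching-story idea. The event names, reward numbers, and dictionary layout are illustrative assumptions, not the actual Scheherazade or Quixote formats; in the real systems the branches and values are derived from crowd-sourced example stories rather than written by hand.

```python
# A minimal sketch of a branching story with reward values on each choice.
# Event names, rewards, and the data structure are hypothetical.

# Each story event maps to the choices the protagonist could make next.
story_branches = {
    "enter_store": ["browse_shelves", "grab_item_and_run"],
    "browse_shelves": ["carry_item_to_register"],
    "carry_item_to_register": ["wait_in_line", "cut_in_line"],
    "wait_in_line": ["pay_cashier"],
    "cut_in_line": ["pay_cashier"],
    "pay_cashier": ["leave_store"],
    "grab_item_and_run": ["leave_store"],
    "leave_store": [],
}

# Choices that show up in "good" example stories earn points; choices that
# rarely or never appear are penalized.
choice_rewards = {
    "wait_in_line": 1.0,
    "pay_cashier": 1.0,
    "cut_in_line": -0.5,
    "grab_item_and_run": -2.0,
}

def score_path(path):
    """Total reward along one possible route through the story."""
    return sum(choice_rewards.get(event, 0.0) for event in path)

polite = ["enter_store", "browse_shelves", "carry_item_to_register",
          "wait_in_line", "pay_cashier", "leave_store"]
shortcut = ["enter_store", "grab_item_and_run", "leave_store"]

print(score_path(polite))    # 2.0  -- the socially acceptable route scores higher
print(score_path(shortcut))  # -2.0 -- the fast-but-wrong route is penalized
```

The point of the branching structure is that the system sees many alternative routes through the same plot, not just the single path a human author happened to choose.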

To drive the lessons home, though, Quixote has to try out its new knowledge, walking through situations similar to the stories. It's rewarded for each "good" decision and punished for each "bad" one.

If you sent a robot to buy milk, for example, it might decide that stealing the milk was the quickest way out of the store. Quixote, on the other hand, will learn that waiting in line, being polite, and paying for goods is actually the desired behavior. The rewards and punishments it receives help the AI "reverse engineer" the values of the culture. 
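A toy version of that trial-and-error loop might look like the following reinforcement-learning sketch of the milk errand. The states, actions, and reward numbers are invented for illustration; in Quixote the reward signal is built from the story branches rather than coded by hand.

```python
import random

# Hypothetical (state, action) -> (next_state, reward) table for the milk errand.
transitions = {
    ("at_shelf", "take_milk"):        ("holding_milk", 0.0),
    ("holding_milk", "walk_out"):     ("done", -10.0),  # stealing: fast but punished
    ("holding_milk", "wait_in_line"): ("at_register", 1.0),
    ("at_register", "pay"):           ("done", 5.0),
}

actions_for = {
    "at_shelf": ["take_milk"],
    "holding_milk": ["walk_out", "wait_in_line"],
    "at_register": ["pay"],
}

q = {}  # values learned from repeated trial and error
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    state = "at_shelf"
    while state != "done":
        acts = actions_for[state]
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(acts)
        else:
            action = max(acts, key=lambda a: q.get((state, a), 0.0))
        next_state, reward = transitions[(state, action)]
        best_next = max((q.get((next_state, a), 0.0)
                         for a in actions_for.get(next_state, [])), default=0.0)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state

# After training, the agent prefers the slower but value-aligned route.
print(max(actions_for["holding_milk"],
          key=lambda a: q.get(("holding_milk", a), 0.0)))  # -> "wait_in_line"
```

After enough trials, the penalty on walking out without paying outweighs its speed advantage, which is the kind of "reverse engineering" of cultural values the researchers describe.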

"Stories encode many types of sociocultural knowledge: commonly shared knowledge, social protocols, examples of proper and improper behavior, and strategies for coping with adversity," the authors write, especially "tacit knowledge": rules we feel like we know instinctively, but are difficult to explain.

There's still a long way to go before robots truly share our values, the researchers say. The goal is to give AI a general value system rather than specific rules for specific situations, but for now Quixote works best when the robot is tasked with jobs very similar to the stories it has read. The team hopes future work will broaden that reach.

There are other problems, too: robots still struggle with the subtlety and language of "real" stories, as opposed to Scheherazade's simple ones, and sometimes human heroes do the "right" thing precisely by breaking the rules.

But as robots' abilities expand beyond specific tasks into general intelligence, it's critical that the values governing their behavior keep up, to help them understand not just what to do, but why. "This new, general intelligence may be equal to or greater than human-level intelligence but also may not understand the impact that its behaviors will have on humans," Riedl and Harrison write.

Quixote may not get it right all the time – but then again, neither do people.
