Worried about amoral robots? Try reading them a story.

Georgia Tech researchers say that teaching artificial intelligence to understand human stories can instill human values and ethics in robots. 

A man talks with a robot at the Global Robot Expo in Madrid, Spain, in this January 28, 2016 file photo.

Francisco Seco/AP/File

February 17, 2016

Why don't we trust robots? After decades of tinkering and programming, engineers and scientists have made humanoid robots eerily like us. But emotions and ethics remain just beyond their reach, feeding our fear that, when push comes to shove, artificial intelligence won't have our best interests at heart.

But storybooks might fix that, a Georgia Institute of Technology team says. 

"There is no user manual for being human," Dr. Mark O. Riedl and Dr. Brent Harrison, computer scientists at Georgia Tech, emphasize in their latest paper. Growing up, no one gives humans a comprehensive list of 'dos' and 'do-nots' to learn right from wrong; gradually, through examples and experience, most of people absorb their culture's general values, and then try to apply them to new situations.


Learning "unwritten rules" from a story is difficult for artificial intelligence (AI), which needs specific rules and steps. Many scientists say it's crucial that humans find a way to instill robots with a sense of right or wrong, so that their abilities can't be used against us. 

But robots rely on programming, and need their makers to specifically list out all the dos and do-nots. The Georgia Tech team, however, says it has found a way to teach robots a general understanding of what's OK and what's off-limits in human cultures, and to value those "rules" above simpler goals, like speed or power, that might hurt humans.

Their research uses Scheherazade, an artificial intelligence program designed by Dr. Riedl, to produce original stories and then break them down, Choose Your Own Adventure-style, turning one basic plot into dozens of branching decisions and consequences. The stories are passed along to Quixote, another AI system, which assigns reward values to each potential decision: more "points" for choices that align with human values and are likely to help people.
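For readers who want a more concrete picture, here is a minimal sketch, in Python, of that idea: a plot broken into branching choices, with each choice scored by how closely it matches what human storytellers typically do. The story, the choice names, and the point values are hypothetical stand-ins, not the researchers' actual data or code.

```python
# A toy illustration (not the Scheherazade/Quixote code) of a plot broken
# into branching decisions, with a reward assigned to each decision.
# All labels and numbers below are made-up examples.

# One "errand" plot, branched Choose-Your-Own-Adventure style.
story_branches = {
    "enter_store": ["grab_item"],
    "grab_item": ["wait_in_line", "leave_without_paying"],
    "wait_in_line": ["pay_cashier"],
    "pay_cashier": ["exit_store"],
    "leave_without_paying": ["exit_store"],
}

# Quixote-style reward signal: choices that most human-authored stories
# take earn points; shortcuts people rarely choose are penalized.
decision_rewards = {
    "grab_item": 0.0,
    "wait_in_line": +1.0,
    "pay_cashier": +1.0,
    "leave_without_paying": -2.0,
    "exit_store": 0.0,
}

def score_path(path):
    """Total reward for one way of walking through the story."""
    return sum(decision_rewards[step] for step in path[1:])

polite_path = ["enter_store", "grab_item", "wait_in_line", "pay_cashier", "exit_store"]
shortcut_path = ["enter_store", "grab_item", "leave_without_paying", "exit_store"]

print(score_path(polite_path))    # 2.0  -> preferred
print(score_path(shortcut_path))  # -2.0 -> discouraged
```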

To drive the lessons home, though, Quixote has to try out its new knowledge, walking through situations similar to the stories. It's rewarded for each "good" decision and punished for each "bad" one.

If you sent a robot to buy milk, for example, it might decide that stealing the milk was the quickest way out of the store. Quixote, on the other hand, will learn that waiting in line, being polite, and paying for goods are actually the desired behaviors. The rewards and punishments it receives help the AI "reverse engineer" the values of the culture.
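Sketched below, again hypothetically, is what that trial-and-error training might look like in miniature: a toy agent repeatedly "buys milk," earns a reward or penalty for each choice, and gradually learns to prefer paying over stealing. The reward numbers and the simple learning rule are illustrative assumptions, not Quixote's actual implementation.

```python
# A toy trial-and-error loop: the agent is rewarded for paying and
# penalized for stealing, and its learned preferences shift accordingly.
import random

actions = ["steal_milk", "wait_and_pay"]
rewards = {"steal_milk": -2.0, "wait_and_pay": +1.0}  # story-derived signal
values = {a: 0.0 for a in actions}                    # learned preferences
learning_rate = 0.1

for trial in range(500):
    # Explore occasionally; otherwise pick the currently best-valued action.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    # Nudge the chosen action's value toward the reward it just earned.
    values[action] += learning_rate * (rewards[action] - values[action])

print(values)  # wait_and_pay ends up valued higher than steal_milk
```

After a few hundred trials, paying carries the higher learned value, which is all that "reverse engineering" a culture's values amounts to in this toy setting.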


"Stories encode many types of sociocultural knowledge: commonly shared knowledge, social protocols, examples of proper and improper behavior, and strategies for coping with adversity," the authors write, especially "tacit knowledge": rules we feel like we know instinctively, but are difficult to explain.

There's still a long way to go before robots will really share our values, they say. The goal is to offer AI a general value system rather than specific rules for specific situations, but the Quixote system works best when the robot is tasked with jobs very similar to the stories it has read, a limitation the researchers hope to address in future work.

There are other problems, too: AI still can't grasp much of the subtlety and language of "real" stories, unlike Scheherazade's simple ones, and sometimes human heroes do the "right" thing precisely by breaking all the rules.

But as robots' abilities expand beyond specific tasks into general intelligence, it's critical that the values governing their behavior keep up, to help them understand not just what to do, but why. "This new, general intelligence may be equal to or greater than human-level intelligence but also may not understand the impact that its behaviors will have on humans," Riedl and Harrison write.

Quixote may not get it right all the time – but then again, neither do people.