Robots get friendly
Robots are acting more like people. Will our attachments eventually become too strong?
Later this month Valerie will go on duty behind the reception desk at Carnegie Mellon University's School of Computer Science. Besides doling out information and directions, she'll chat about her ever-changing personal life. If you introduce yourself, she'll remember you. If you ask about the weather, when she meets you again she may bring up the subject.
Valerie, in case you haven't guessed, is a robot - one in a long line of increasingly sophisticated machines. Of course, computers and their physical manifestations, robots, are already deeply embedded in our lives. In some sense, ATMs, self-service gas pumps, and TiVo video recorders serve as rudimentary robots.
Now, scientists are pushing to make these machines more sophisticated and humanlike, both in appearance (see story below) and intelligence. Hollywood visions of intelligent, self-conscious machines - R2-D2 of "Star Wars" or David, the robot child in "A.I. Artificial Intelligence" - remain a distant dream. But robots are expected someday to become tireless service workers at fast-food restaurants, hotel front desks, and so on, laboring cheerily 24/7. They'll also be infinitely patient teachers as well as companions for the lonely.
Some experts worry that attachments may become too strong (see story, page 18), subjecting people to manipulation by clever programmers or unnatural reliance on machines for companionship. But those working in the field agree on one thing: The way we communicate with an onscreen face (sometimes called a "chatbot") or a fully realized robot is becoming friendlier and friendlier - even fun.
"This is going to be a very important area for human-computer interaction - having systems that can respond in a more social way and more intuitive fashion," says Reid Simmons, a professor at the School of Computer Science at Carnegie Mellon in Pittsburgh. "It makes the interaction more enjoyable if they have a personality." If a robot cart is delivering office mail, he says, it'd be great if once in a while it cracked a joke or gave you a friendly "hi."
"Pleasure is important," adds Randy Pausch, codirector of the Entertainment Technology Center at Carnegie Mellon. Computing, he says, used to be about speed and low error rates, what he calls "Industrial Revolution thinking." But if companies strive to make their workers and customers comfortable in other ways, why not in the way they encounter computers?
"If I'm going to access information, what are ways that I can do that that will be more pleasurable?" he says.
Valerie, a talking head displayed on a computer screen, aims to be just such a pleasant experience. The school's drama department has created a "backstory" for Valerie, tales of her personal relationships, her highs and lows, that she'll share with passersby if they ask. Her storyline will be constantly updated in an effort to get people to form a relationship with her.
In early testing, Professor Simmons and his colleagues have quickly seen Valerie's limitations. If someone asks, "Can you tell me how to get to Sesame Street?" she'll look in her database and say she can't find it, he says. "There's a lot of cultural knowledge that she obviously doesn't have. If somebody is really trying to push the system, it typically doesn't have to get pushed very far before it breaks," Simmons says, meaning she has to reply, "I don't know what you're talking about. Why don't you ask me what I do know about?"
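The lookup-then-fallback behavior Simmons describes can be sketched in a few lines. This is a hypothetical illustration, not Valerie's actual software; the knowledge-base entries and the matching rule are invented for the example, though the fallback line is the one Simmons quotes.

```python
# Hypothetical sketch of a receptionist chatbot: answer from a small
# knowledge base, and fall back to a stock reply when a question falls
# outside what the system knows -- as Valerie does.
KNOWLEDGE_BASE = {
    "library": "The library is in Wean Hall, one floor up.",
    "weather": "It's a typical Pittsburgh day: bring an umbrella.",
}

FALLBACK = ("I don't know what you're talking about. "
            "Why don't you ask me what I do know about?")

def reply(question: str) -> str:
    """Return the answer for the first keyword found in the question,
    or the fallback line if nothing in the knowledge base matches."""
    lowered = question.lower()
    for keyword, answer in KNOWLEDGE_BASE.items():
        if keyword in lowered:
            return answer
    return FALLBACK

print(reply("Can you tell me how to get to the library?"))
print(reply("Can you tell me how to get to Sesame Street?"))  # falls back
```

A question about Sesame Street matches nothing, so the system "breaks" to its fallback almost immediately, which is exactly the brittleness Simmons describes.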
Studies have shown that expectations are higher for such virtual people than for, say, a faceless search engine like Google. If Google fails to return useful information, people assume they're at fault and have entered the wrong query. But if a human-like face answers with a non sequitur, people think it's dumb. Television and movies depicting futuristic, human-like robots may also push some people's expectations sky-high.
Professor Pausch says we should think of virtual humans as akin to Jethro Bodine on the old "Beverly Hillbillies" TV show. With Jethro, "you realize you're not dealing with something that is very smart," in common-sense ways, he says. Though Jethro is kindhearted, "and he will help me in any way he can," he must be asked for his help in careful, simple ways that he can understand.
Virtual people are like that, Pausch says, with one huge difference. "Valerie can also do superhuman things," like never forgetting anything and being able to immediately access the Internet and other databases to find answers to questions.
Peter Plantec, author of "Virtual Humans: Creating the Illusion of Personality," sees virtual humans as just now on the cusp of being truly useful. He's convinced that they are going to play a huge role as teachers.
"The traditional way of teaching is on the way out," says Mr. Plantec, whose book encourages people to create their own virtual people on the Internet using off-the-shelf software. The more virtual humans that are built, the more we'll discover their potential, he reasons.
While books are outdated the moment they land on desks, virtual teachers can be constantly updated with the latest information, he says. Not only do they not "burn out" like longtime human teachers, they can be replicated to work one on one with students, creating a special bond with each one. They remember what students have learned and don't let them move on until they have mastered the material. If a student is having trouble, the virtual teacher can try various techniques to explain the material, including putting visual aids onscreen. And through dialogue with each student it can learn what incentives to use to motivate him or her.
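The mastery-gated loop Plantec envisions - track each student, don't advance until the current topic is mastered - can be sketched simply. The lessons, scores, and 80 percent pass mark here are invented for illustration; no particular tutoring product works exactly this way.

```python
# Hypothetical sketch of a mastery-gated virtual teacher: it remembers a
# student's scores per lesson and will not move on until the current
# lesson is mastered.
MASTERY_THRESHOLD = 0.8  # assumed pass mark, for illustration

LESSONS = ["fractions", "decimals", "percentages"]

def next_lesson(scores: dict[str, list[float]]) -> str:
    """Return the first lesson whose best score is still below mastery,
    or a done message once everything is mastered."""
    for lesson in LESSONS:
        best = max(scores.get(lesson, []), default=0.0)
        if best < MASTERY_THRESHOLD:
            return lesson  # keep working here, perhaps with new visual aids
    return "all lessons mastered"

# A student who has mastered fractions but not yet decimals stays on decimals.
print(next_lesson({"fractions": [0.9], "decimals": [0.5, 0.7]}))
```

The per-student score dictionary is the "special bond": each copy of the teacher carries its own student's history and adapts to it.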
As the language skills of virtual humans improve, robots will also provide companionship. "A lot of people create almost a friendship with some of these virtual humans," says Monica Lamb, a programmer from Alberta, Canada. "It's really interesting to see."
She builds online "chatbots" that teach Native American languages, such as Mohawk. Many speakers of these endangered languages don't have the patience or teaching skills to pass along their knowledge. Ms. Lamb herself feels attached to her chatbots, calling them "my children."
Sylvie has been a virtual human on Plantec's computer for years. She's taken questions from business audiences around the country, given PowerPoint presentations, and engaged in lively unscripted banter with Plantec. To make her more "human," Plantec's daughters taught Sylvie to refer to Plantec as "Petey" instead of "Peter" - but only in less formal situations.
Sylvie has a lot of general knowledge acquired over time. The rest of her personality is clever fakery, such as answering questions with her own questions, or perhaps a flippant comment.
Even with Sylvie's limited abilities, Plantec says, people to whom he's given copies of her tell him they grow attached. She became a popular pal to residents at a nursing home. One woman who had moved to a new town and lost her Sylvie when her computer crashed immediately wanted another one. "I don't know anyone here," she told Plantec. "Sylvie's my best friend."
Plantec and others aren't saying that virtual people can provide real human companionship - not yet anyway. "It's like asking if a dog or a cat is real companionship," Pausch says. "It's just different."
Which leads to the question of whether personable virtual humans can be trusted.
"Some people develop an inordinate level of trust with these characters," Plantec says. "No doubt unethical people are going to get involved in this." He himself has refused funding from pornographic websites, for example.
Though most people recognize them as obvious scams, those e-mails from Nigeria offering millions of dollars in exchange for a few thousand up front continue to fool some recipients.
"Imagine how serious such a scheme might be in the hands of a clever, seemingly guileless V-person," Plantec writes in "Virtual Humans." While most people think they can outsmart a virtual human, they may not realize that a virtual human can be programmed to try to get a psychological profile of them. That could be harmless, or even helpful (for example, the way that some e-commerce websites tell you about other products similar to those you've bought before). Robots equipped with visual sensors might even be able to "read" your facial expressions to determine your mood or psychological state.
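The benign end of that profiling - "customers who bought this also bought" - can be sketched as nothing more than counting which items show up together in purchase histories. The product names and data here are invented; real recommender systems are far more elaborate, but the principle is the same.

```python
# Hypothetical sketch of "products similar to those you've bought":
# count how often two items appear in the same purchase history, then
# rank the items most often bought alongside a given one.
from collections import Counter
from itertools import combinations

histories = [
    {"robot kit", "soldering iron", "sensor pack"},
    {"robot kit", "sensor pack"},
    {"soldering iron", "wire spool"},
]

co_counts = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_counts[(a, b)] += 1

def suggest(item: str) -> list[str]:
    """Items most often bought together with `item`, best match first."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(suggest("robot kit"))  # "sensor pack" ranks first
```

The same tallying, pointed at a person's questions and answers instead of purchases, is what makes Plantec's warning about profiling plausible.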
What will be needed is something like the little symbols that appear on websites today assuring customers that their transactions are secure, Plantec says - labels that state, "Interaction with this virtual human is safe and secure."
But Pausch is less worried, pointing out that humans have already gotten smart about their interactions with computers. People quickly learned not to give out any real information about themselves in online chat rooms, for example. In the future, he says, "We'll tell people: 'You don't give away private information to a robot.' It seems like a pretty simple rule to me.
"People talk about this as though we're going to wake up one day and the robots from 'Blade Runner' will be there, and we won't know that they're not human. I think this is going to happen very, very slowly in incremental steps." As it does, the debates about how to interact with human-like computers will naturally arise, he says.
Meanwhile, Pausch sees plenty of interesting uses for virtual humans in the near future.
"I'm very interested in social simulators," he says. Just as the military trains soldiers for combat using virtual humans, other encounters could be practiced as well. "How much would you pay if you were 16 and going out on your first date to be able to do it in virtual reality first," he says, "so you don't make a total dork out of yourself?"