What if your laptop knew how you felt?

Researchers train computers to 'read' emotions, which could help with teaching, security, people with autism – and cranky users.

By Cristian Lupsa, contributor to The Christian Science Monitor / December 18, 2006



Not even Dan Brown and his Da Vinci codebreakers dared tackle the mystery of Mona Lisa's smile. But Nicu Sebe, a computer vision researcher at the University of Amsterdam, Netherlands, did. He processed the enigmatic portrait with his "emotion recognition" software and – eureka! – Mona Lisa was happy (83 percent) and slightly disgusted (9 percent).

Mr. Sebe valiantly pursued other mysteries. He decoded the image of Che Guevara that adorns T-shirts worldwide and proclaimed that El Comandante was mostly sad. And the fellow in Edvard Munch's "The Scream"? He's much less frightened than surprised, Sebe declares.

Faces reveal emotions, and researchers in fields as disparate as psychology, computer science, and engineering are joining forces under the umbrella of "affective computing" to teach machines to read expressions. If they succeed, your computer may one day "read" your mood and play along. Machines equipped with emotional skills could also be used in teaching, robotics, gaming, sales, security, law enforcement, and psychological diagnosis.

Sebe doesn't actually spend research time analyzing famous images – that's just for fun. And calling Mona Lisa "happy" is not accurate science, but saying she displays a mixture of emotions is, Sebe says. Why? Because to accurately read an emotional state, a computer needs to analyze changes in expression against a neutral face, which Da Vinci did not provide.

If that's the case, are computers even close to reading emotions? You bet.

Computers can now analyze a face from video or a still image and infer the emotion it displays almost as accurately as humans – sometimes better. It generally works like this (a rough code sketch follows the list):

1. The computer isolates the face and extracts rigid features (movements of the head) and nonrigid features (expressions and changes in the face, including texture);

2. The information is classified using codes that catalog changes in features;

3. Then, using a database of images exemplifying particular patterns of motions, the computer can say a person looks as if they are feeling one of a series of basic emotions – happiness, surprise, fear – or simply describe the movements and infer meaning.
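To make the three steps concrete, here is a minimal sketch in Python, assuming OpenCV for face detection. The flattened-pixel features and the nearest-neighbor lookup over a hypothetical labeled database stand in for the richer feature coding and classifiers real systems use:

```python
# A minimal sketch of the three-step pipeline described above – not any
# lab's actual system. Assumes OpenCV (cv2) and NumPy are installed and a
# small, hypothetical database of labeled example images is available.
import cv2
import numpy as np

# Step 1: isolate the face in the image.
def isolate_face(image):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], (64, 64))

# Step 2: encode the face as a feature vector. Real systems code changes
# in rigid and nonrigid facial features; flattened pixels stand in here.
def extract_features(face):
    return face.astype(np.float32).flatten() / 255.0

# Step 3: compare against a database of labeled examples and report the
# closest basic emotion (nearest neighbor stands in for the classifier).
def classify(features, database):
    best_label, best_dist = None, float("inf")
    for label, example_features in database:
        dist = np.linalg.norm(features - example_features)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Note one design point from Sebe's caveat above: a rigorous system measures changes against a neutral face, which is why a single portrait like the Mona Lisa can only be read as a mixture of emotions.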

Rosalind Picard is a contagious bundle of excitement when she talks about "Mind Reader," a system developed by her team in the Affective Computing Group at the Massachusetts Institute of Technology in Cambridge, Mass.

"Mind Reader" uses input from a video camera to perform real-time analysis of facial expressions. Using color-coded graphics, it reports whether you seem "interested" or "agreeing" or if you're "confused" about what you've just heard.

The system was developed to help people with autism read emotions, as they have difficulty decoding when others are bored, angry, or flirting. Their lack of responsiveness makes them seem insensitive to others. Ms. Picard's team uses cameras worn around the neck or on baseball caps to record faces, which the software can then decode.

Picard, a pioneer in the field, says she learned a broader lesson from this research: If you can teach a person when to be sensitive to others, you probably could teach a machine to do so as well.

Jeffrey Cohn, a psychologist at the University of Pittsburgh, applies his knowledge of the human face to behavior research. Mr. Cohn is among the relatively few experts certified to use the Facial Action Coding System, which classifies more than 40 action units (AUs) of the face. He is a man who can spot the inner corners of your eyebrows inching medially toward each other and then rising slightly, and call out: "That's AU one plus four," a combination of action units associated with sadness.
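Cohn's "one plus four" arithmetic can be illustrated with a toy lookup table covering only a handful of the 40-plus action units. The AU names below are from FACS, but the pattern table is illustrative, not Cohn's actual coding scheme:

```python
# A toy illustration of FACS-style coding. AU names are standard FACS
# labels; the pattern-to-emotion table is a simplified example.
ACTION_UNITS = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
}

EMOTION_PATTERNS = {
    frozenset({1, 4}): "sadness",     # the "AU one plus four" in the text
    frozenset({6, 12}): "happiness",  # an illustrative smile combination
}

def interpret(observed_aus):
    """Name an emotion if the observed AUs contain a known pattern."""
    for pattern, emotion in EMOTION_PATTERNS.items():
        if pattern <= observed_aus:
            return emotion
    return "no match: " + ", ".join(
        ACTION_UNITS.get(au, f"AU{au}") for au in sorted(observed_aus))

print(interpret(frozenset({1, 4})))  # -> sadness
```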

"The face is almost always visible," Cohn says. "People communicate a lot about feelings and thoughts through changes in facial expression."
