First tentative steps

FIRST IN A THREE-PART SERIES

Kismet's face would never launch a thousand ships.

With Yoda's ears, Marty Feldman's eyes, and enough parts to please a hardware-store owner, the mechanical head might be dismissed as a curiosity. But it has a face with an attitude - or, more accurately, 15 attitudes, from anger and disgust to joy and surprise.

The immediate goal of the project at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory in Cambridge is to develop ways for humanoid robots to communicate their "feelings" to a human "caregiver."

It's one tiny step toward a long-term goal of artificial-intelligence research: to build and program machines that display human-like intelligence.

"My belief is that we, people, are machines," says Rodney Brooks, director of MIT's AI Lab. "So in principle, I see no reason why we can't build a robot that is as capable as a human being." Along the way, many AI techniques that are expected to help lead to such robots are quietly working their way into technologies that support a wide range of human activities - from exploring space and forecasting weather to assigning airline flights to specific airport gates. It is already becoming routine for AI applications to search the World Wide Web and help users install new programs on their computers.

"In fact, if you use the Microsoft Office suite, that little [electronic] paper clip that many people find annoying is actually less annoying than it might otherwise be" because it uses a programming approach that "most people consider to be part of the AI collection of engineering tools," says Patrick Henry Winston, professor of computer science and former head of MIT's AI Lab.

To Ray Perrault, director of the Artificial Intelligence Center at SRI International, a technology research firm in Menlo Park, Calif., such "under the blanket" or embedded AI-related applications signal the field's maturity. "At first, you could see some generic, obviously AI products on the market," he says. "Now, AI ends up more frequently as an infusion of technologies in bigger packages.

"For example, work in the past five years or so has led to methods for extracting information from text. This has become embedded in software products that manage the flow of résumés through human-resources departments."

An 'alien intelligence'

Yet AI occasionally throws off its blanket. One of the most highly publicized examples of AI prowess in a specialized domain came in May 1997, when IBM's Deep Blue forced world chess champion Garry Kasparov to resign the last of six games, after he beat the machine in the first game, lost in the second, and played to draws in three more. A year earlier, Mr. Kasparov had played the machine and beaten it three games to one with two draws.

After the '97 match, Kasparov claimed he could sense in the machine an "alien intelligence," according to Hans Moravec, principal research scientist and a founder of the robotics program at Carnegie Mellon University in Pittsburgh.

IBM representatives immediately dismissed the notion that Deep Blue was intelligent. But Dr. Moravec finds their dismissal too facile, noting that IBM has clung to that position since it first started selling computers and found that calling them electronic brains hurt sales.

"It took a chess grandmaster to make the psychological interpretation," he says.

At the midterm of what MIT's Dr. Winston calls a 100-year enterprise, AI efforts rate an A for their march into the marketplace.

In terms of practical applications, "we've got a pretty good record, given we've only been working at it for 50 years," agrees Raj Reddy, dean of the school of computer science at Carnegie Mellon.

But basic research in artificial intelligence draws only a D from Winston.

"The field has been pulled around to focus on applications because the science questions have been too hard," he says. Moreover, grant money - coming largely from the Defense Department - favored applications.

But he and other researchers say the field is poised for significant breakthroughs.

One factor driving this optimism is the rapid increase of computing power available in small packages at affordable prices.

In 1967, computer memory cost $1 a byte, according to Winston. At those prices, today's $2,000 laptop computer would cost more than $6 billion, before taking inflation into account.
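The comparison is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming the capacity being repriced is roughly 6 gigabytes - a typical late-1990s laptop hard drive; the article does not specify the figure:

```python
# Back-of-the-envelope check of Winston's 1967 memory-cost comparison.
# Assumption (not stated in the article): the $2,000 laptop's capacity
# being repriced at 1967 rates is roughly 6 GB.
COST_PER_BYTE_1967 = 1.00        # dollars per byte, per Winston
capacity_bytes = 6 * 10**9       # assumed ~6 GB

cost_at_1967_prices = capacity_bytes * COST_PER_BYTE_1967
print(f"${cost_at_1967_prices:,.0f}")  # $6,000,000,000 - "more than $6 billion"
```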

As for speed, Moravec adds that today's machines allow researchers to conduct in minutes highly detailed experiments that once took hours, allowing scientists to sort through possible programming approaches more quickly. "The algorithms we are running are not more complicated than they were in the 1970s," he says, "but computing power and memory are 1,000 times higher."

In addition, pressure is growing in Washington to pour more money into research on information technologies, which would include AI-related work.

In January, the President's Task Force on Information Technologies warned that lackluster federal support for IT research threatens America's competitive edge in the field. To correct the situation, the panel recommends that annual federal spending on IT research grow by $1.37 billion by 2004.

If the task force's recommendations are adopted, "the field of AI as a whole will change," says SRI's Dr. Perrault.

Meanwhile, sensor technology is improving, and advances in fields such as neurobiology, psychology, and linguistics are giving AI researchers new clues about how to design systems to mimic human intelligence.

To see how well a system stacks up, researchers have posed a series of questions:

*Does it display goal-oriented behavior and a capacity to adapt?

*Can it learn from experience?

*Does it use vast amounts of knowledge?

*Is it aware of itself?

*Can it communicate with humans through written language or speech?

*Can it tolerate mistakes or vagueness in communication?

*Does it respond in real time?

Human characteristics

"If you look at different characteristics of human beings that lead to intelligent behavior, we seem to have some union of all these things," Dr. Reddy says.

And while AI efforts can lead to technologies that solve problems in ways that have no biological equivalent, "the grand vision is still Lt. Cmdr. Data [of "Star Trek" fame]," says James Hendler, professor of computer science at the University of Maryland at College Park.

"Robotics has made major strides in the past few years. It's on a fast, upward curve," he says.

One of the venues for robotics development is known as the Cog Shop at MIT's AI lab. There, Dr. Brooks and a clutch of graduate students work on Kismet and its larger, older sibling, Cog, a human-scale, waist-up robot, which the research team is using to draw various AI threads together. In particular, the team is using Cog to test ideas for helping robots learn.

"In order to act intelligently, there's a lot of things you have to know about the world," Brooks says. "One approach is to try to tell an AI program everything about it in great detail. We're trying to build a system that can act in the world and interact with people to learn in a faster way."

In particular, the team holds that if a robot is to become human-like, it must be designed to experience the world as humans do - through the sensory systems, degrees of movement, and physical structure that humans have.

The team also reasons that if the robot appears human-like, at least in rough shape if not in detail, humans will be more comfortable interacting with it.

Drawing on discoveries in human neurobiology, the team has discarded the notion of controlling Cog's motions and interactions between its sensory system and motor system by laying out detailed instructions in a central computer. Instead, they've opted to distribute control and communication throughout the robot.

"It's the interaction of these independent pieces that gives us the interesting behavior," says Brian Scassellati, one of several graduate students working on Cog.
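The distributed-control idea can be illustrated in miniature. In the hypothetical sketch below - not the Cog team's actual code - several independent behavior modules each map sensor readings to a proposed motor command, and a simple priority arbiter picks one; no central program plans the motion:

```python
# Minimal illustration of distributed control: independent behavior
# modules each propose a motor command from sensor readings, and a
# priority arbiter selects one. Higher-priority behaviors take over
# only when active. (Hypothetical sketch, not Cog's actual software.)

def avoid_limit(sensors):
    """Back off when a joint nears its mechanical limit."""
    if sensors.get("joint_angle", 0) > 80:
        return ("relax", -10)
    return None  # behavior stays silent

def track_object(sensors):
    """Turn toward a visual target if one is in view."""
    if sensors.get("target_offset") is not None:
        return ("turn", sensors["target_offset"])
    return None

def rest(sensors):
    """Default: hold the current position."""
    return ("hold", 0)

# Listed from highest to lowest priority.
BEHAVIORS = [avoid_limit, track_object, rest]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"target_offset": 5}))  # ('turn', 5)
print(arbitrate({"joint_angle": 85}))   # ('relax', -10)
print(arbitrate({}))                    # ('hold', 0)
```

The interesting behavior comes from which module happens to win at each moment, not from any one module's plan - the point Mr. Scassellati makes above.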

As a result, the team has watched Cog learn to reach for an object, at first with halting motions similar to those seen in infants. And Cog has learned to quickly shift its eyes - two pairs of cameras that provide stereoscopic as well as peripheral and center-of-view vision - to track an object it's viewing.

The system's design also allows for a hands-on approach to teaching the robot how to move - by standing behind it and gently manipulating its arms, much as a golf pro might in helping the weekend duffer improve his swing.

Brooks allows that the first truly human-like robots probably won't appear "in my lifetime."

But research that the team is undertaking with Cog represents "the first baby steps in that direction."

https://www.csmonitor.com/1999/0318/p15s1.html