WASHINGTON — It didn't take a surge of computing power to get people thinking about life in a world where machines may be smarter than people.
From the golem of Jewish legend to Mary Wollstonecraft Shelley's "Frankenstein," the prospect of infusing inert matter with thought has fired the imagination, especially when the creature runs amok, smashing walls, mulching sheep, or destroying its creator.
By the last half of the 20th century, the "creature" had shifted from a lump of clay to integrated circuits. But the risk of bad or unintended results from contact with things that think remains as powerful a concern as ever.
Few expect today's supercomputers to jump the scientist in the lab. The challenge is to humanity itself: If machines are smarter than mankind, what does it mean to be human? Is there more to consciousness than computation? Are there costs to merging too closely with machines?
Hollywood took a lighthearted crack at this issue in the 1957 film "Desk Set." A computer named EMERAC, or "Miss Emmy," threatens to displace the cardigan-sweatered set in the research department, but proves no match for the sublime Katharine Hepburn. While Emmy could calculate the total weight of the earth "with or without humans," she nearly soldered her circuits over the question, "Does the king of the Watusis drive a car?" In the end, Kate's hairpins save the machine from meltdown, jobs are preserved, and faith in humanity is restored.
But the mood was not so jovial when World Chess Champion Garry Kasparov threw up his hands and conceded defeat in Game 6 against Deep Blue, an IBM supercomputer that could evaluate 200 million chess positions a second. After the historic 1997 match, Mr. Kasparov said he had felt the fate of all humanity on his shoulders.
For one of the fathers of artificial intelligence (AI), this outcome was never in doubt; the only question was how soon it would happen. At a time when many people doubted that computers had a future, British mathematician Alan Turing wrote a landmark 1950 paper, "Computing Machinery and Intelligence," later anthologized as "Can a Machine Think?" In it, he predicted that by "the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Expert prediction has not been the most accurate of guides in preparing for a new world of artificial intelligence.
For example, in the late 1940s, the best-known computer in the world, ENIAC, belonged to the US Army. Emmy's namesake had about 20,000 vacuum tubes and weighed more than 30 tons. Mathematician John von Neumann, a key consultant on the ENIAC project, wrote in 1951 that what prevents computing machines from rivaling natural organisms is "the inferiority of our materials." In 1949, Popular Mechanics opined that "computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1-1/2 tons."
The invention of the integrated circuit in 1958 shattered the size barrier and opened the floodgates of speculation on whether, and how soon, machines would master humans. Consider these visions of the future from three experts in the field of computer intelligence:
*By 2050, machines will have met and exceeded human levels of intelligence. "Rather quickly, they could displace us from existence." Some people may opt to "personally transcend their biological humanity" by uploading themselves into a computer, according to Hans Moravec, founder of the Robotics Institute at Carnegie Mellon University, in "Robot: Mere Machine to Transcendent Mind."
*By 2029, many areas of the brain will have been reduced to algorithms. Neural implants will enhance seeing, hearing, memory, and reasoning. Computers will be doing most of the teaching and much of the learning. Many of the leading artists will be machines. Life will be extended through the use of bionic organs, and most communication won't involve a human. By 2099, there will no longer be any clear distinction between humans and computers, according to Ray Kurzweil, inventor and author, in "The Age of Spiritual Machines."
*Here or coming soon: "unobtrusive computing." Appliances anticipate human needs. Furniture and floors electromagnetically detect gestures. Medicine cabinets monitor pill consumption, toilets perform routine chemical analyses, and both report "aberrations" to the doctor. Insurance companies price policies by personal details, rather than demographics.
"If you eat well, and you're willing to let your life insurance company talk to your kitchen, then you could be rewarded for having a salad instead of a cigarette," according to Neil Gershenfeld, co-director of the Things That Think research consortium at the MIT Media Laboratory, in "When Things Start to Think."
Any one of these visions deeply challenges what it means to be human. "The primary political and philosophical issue of the next century will be the definition of who we are," writes Dr. Kurzweil.
So far, most of the speculation on how to meet AI's challenges to privacy and human dignity has been left to scientists. MIT's Gershenfeld insists that technology will come up with solutions to the privacy problem: Software encryption is too easy to deploy for governments to prevent its use. And individuals who don't want insurance companies plugged into their daily lives don't have to participate.
"The insurance company would not be in the business of enforcing any morality; they would be pricing the expected real cost of behavior," he writes. If you don't want your insurance company to know where you're driving or when you're home, you can always encrypt the data or just pay more for insurance from another company.
In this brave new world, if you can't beat technology, you can always merge with it. Dr. Moravec urges readers not to be alarmed at the prospect of being displaced by robots, because "these future machines are our progeny, 'mind children' built in our image and likeness, ourselves in more potent form. Like biological children of previous generations, they will embody humanity's best hope for a long-term future. It behooves us to give them every advantage and to bow out when we can no longer contribute."
British scientist Kevin Warwick literally stepped into the man/machine interface when he had a silicon chip implanted in his arm on Aug. 24, 1998. He called himself the world's first cyborg - "part man, part machine."
The chip didn't do much more than log on to the computer, open doors, and switch on the lights when he walked into his office at the Cybernetics Department at the University of Reading in England. But it also established a direct link between human and digital capacity, along with the risk that the distinction between the two might be obscured or lost.
In an interview with the Monitor at that time, he said that "the way humans can keep up with machines is to have silicon implants helping our intelligence."
Of his 11 days as a cyborg, he recalls: "The biggest thing was the feeling when the implant was there of being closer to the computer itself. It's what I hadn't expected.... I found myself thinking, 'Am I me? or Am I me and the machine together?' "
The experiment also raises questions about how much we defer to intelligent machines and at what cost to our humanity, he adds.
"In the past, it was OK to get machines to get physical things for us, but now we're starting to do that in ways we need to think about. We're starting to ask the question, 'Why do we exist, because machines do everything that we do and do it better?' And do we want to design machines that take over from us?" he adds.
Other scientists caution that much of this speculation is running far ahead of what is actually being achieved in the lab or ever could be.
"There is nothing in the marketplace of ideas now that justifies these predictions.... It's no different than predicting that in three years we will be visited by aliens from another planet," says Selmer Bringsjord, director of the Minds and Machines Laboratory and Program at Rensselaer Polytechnic Institute in Troy, N.Y.
Moreover, Deep Blue's triumph was not as decisive a defeat for humanity as it first appeared. Since chess reduces well to a series of computations, it's a "remarkably easy" game for a machine, he says.
A tougher test would be to see if a computer could master something that is a snap for a fourth-grader - telling a story. Dr. Bringsjord is eight years into a project to build a formidable silicon Hemingway, which he calls Brutus.1. Brutus writes stories about betrayal, a concept that Bringsjord and colleague David Ferrucci, a scientist at IBM's T.J. Watson Research Center, say they have reduced to an algorithm.
But even if you could reduce to a mathematical expression all the emotions needed to tell a good story (love, unrequited love, hate, fear, self-sacrifice, loyalty, and so on), computers will never be able to best human storytellers, Bringsjord argues, because consciousness is more than calculation.
"It is clear from our work that to tell a truly compelling story, a machine would need to understand the inner lives of his or her characters," he writes. "Future robots may exhibit much of the behavior of persons, but none of these robots will ever be a person; their inner life will be as empty as a rock's."
Nonetheless, machines are advancing ineluctably, and spiritual leaders are not doing enough careful and critical thinking about it, he adds. "We've got to think about what does make us distinctive," he says.
"Could a computer ever demonstrate the poise of a Daniel in the lions' den? If there is a God, and Daniel was talking to that God, then all bets are off. All these other routes of getting to this are inefficient. If some of the claims for a spiritual reality are true, it short-circuits the whole thing," he adds.
Even the suggestion that man is like a machine or that a computer can ever replace humans can be destructive, says Rustum Roy, a materials scientist at Pennsylvania State University.
"There is a price tag to the more intimate integration of humanity with any machine," he adds. "The affirmation that we can all be reduced to computers makes us more like computers."
*Parts 1 and 2 of this series ran March 18 and March 25.