The arrival of artificial intelligence has come not with a bang but with a whir - the faint hum of computers.
They're fielding our calls, sifting our records, catching our criminals, even teaching our children. These machines may not be smart in the way scientists define the word. But as business tugs artificial intelligence (AI) out of the laboratory and into the real world, it's changing the way whole industries operate.
Scary stuff? Perhaps, although not in the way many people fear. Scientists say they're years away from building something anywhere near as smart as a five-year-old, much less something that could take over the world. But stealthily, with the tapping of keys and the clicking of a mouse, smart machines are insinuating themselves into our lives.
They'll deliver great benefits - if people, not companies, decide where and how they'll be used.
Anyone wanting to know how much Federal Express charges for a package can dial 800-GOFEDEX and punch 5. A computer answers. And if you speak reasonably clearly, it will tell you the rate for a priority overnight letter between any two zip codes in the country. Getting a computer to recognize words isn't particularly new. What's smart is that the machine understands what certain words mean and carries on a limited dialogue.
Online investors at E*TRADE, for example, can say phrases such as "Buy 100 shares at the market price" or "What's the closing price of the Magellan fund?" and have the computer respond.
Does that make the computer intelligent?
Not according to most scientists. "There's a little more to intelligence than doing one thing well," says Murray Campbell, one of the IBM researchers who created the chess-playing computer that beat world champion Garry Kasparov in 1997.
But businesses are snapping up the technology because they aim to augment the intelligence of their workers, not replace it. "We're bringing artificial intelligence to bear on many, many domains," Mr. Campbell says.
Like grocery shopping. When customers log onto Streamline, a Westwood, Mass., shopping service on the Internet, special offers pop up on screen. They're not random. A desktop computer generates them based on its past experience with the customer and a series of rules dreamed up by marketers.
When a vegetarian orders pork
The rules can be quite complex: every third Monday, offer a 10 percent discount to every customer who has at least one child and orders milk once a month. But if a health-conscious vegetarian suddenly orders a pork roast, the system is capable of abandoning its past rules, guessing there's a party in the making, and offering potato chips with fat-free dip, says Gad Barnea, chief technology officer of Manna Network Technologies in Newton, Mass.
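Rule systems like this can be sketched as ordinary predicates over a customer profile. The record fields, thresholds, and offers below are invented for illustration, not Streamline's or Manna's actual logic:

```python
from datetime import date

def third_monday(d):
    """True if d is the third Monday of its month (Mondays fall on days 15-21)."""
    return d.weekday() == 0 and 15 <= d.day <= 21

def milk_discount_rule(customer, today):
    """Every third Monday, 10% off for customers with at least one child
    who order milk at least once a month."""
    if (third_monday(today)
            and customer["children"] >= 1
            and customer["milk_orders_per_month"] >= 1):
        return "10% discount"
    return None

def party_rule(customer, current_order):
    """If an order clashes with the customer's profile (a vegetarian
    buying pork), guess a party is in the making and suggest party food."""
    if customer["vegetarian"] and "pork roast" in current_order:
        return "offer potato chips with fat-free dip"
    return None

customer = {"children": 2, "milk_orders_per_month": 4, "vegetarian": True}
print(party_rule(customer, ["pork roast", "lettuce"]))
```

The point of the second rule is the override: when the current order contradicts past behavior, the system sets its usual rules aside rather than applying them blindly.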
AI is also storming the ivory tower. Last month, a machine called the "e-rater" began grading the two essay questions on the Graduate Management Admission Test, the standard exam for business-school applicants. A human being also grades the essays. If man and machine disagree by more than a point on a scale of 0 to 6, the test goes to another human evaluator.
The e-rater grades primarily writing style. Tom Landauer sells a machine-based service that analyzes essays for content. To program it, the University of Colorado-Boulder professor has it "read" the materials the students write, then has it teach itself what the words mean. "The way you learn word meanings is by being exposed to a lot of language," he says. His system works the same way.
AI is even showing up in toys. Lego, the company that makes the popular plastic building blocks, now sells a kit that lets children design working robots. Equipped with sensors and a programmable computer, the robots can attack intruders, flee from danger, or move around obstacles.
But many artificial-intelligence companies are tackling more mundane, although lucrative, areas such as insurance fraud. Losses run into the billions of dollars. And since insurance adjusters can't scrutinize all the claims that come in, they're buying AI systems that can tease out subtle inconsistencies from dozens, sometimes hundreds, of variables in a claim.
For example, a few months after the Workers Compensation Fund of Utah installed new fraud-detection software, its computers flagged an expensive head-injury claim. A man had fallen from a truck and claimed brain damage. But the computer noticed he was making more than the usual number of doctor's visits.
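One common way such software flags a claim is by checking each of its variables against historical norms for similar claims and reporting anything far outside them. A minimal sketch, with invented statistics and an invented threshold:

```python
import statistics

def flag_outliers(claim, history, threshold=2.0):
    """Return the claim variables lying more than `threshold` standard
    deviations from the historical mean for similar claims."""
    flags = []
    for field, value in claim.items():
        past = history[field]              # past values of this variable
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if stdev and abs(value - mean) / stdev > threshold:
            flags.append(field)
    return flags

# Invented numbers: similar head-injury claims involved ~8 doctor visits.
history = {"doctor_visits": [7, 8, 9, 8, 6, 9, 8, 7]}
claim = {"doctor_visits": 25}
print(flag_outliers(claim, history))  # the unusual visit count is flagged
```

Real systems weigh dozens or hundreds of variables jointly rather than one at a time, but the principle is the same: flag the claim, then let a human investigator follow up.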
When investigators followed up, they found the man was shopping doctors to find one who would substantiate his injury. The company denied the claim, prosecuted the worker, and saved itself hundreds of thousands of dollars.
"Any insurance company that isn't using some kind of fraud detection, I think is doing a disservice to its customers," says Robert Short, senior vice president of the Utah insurance company. "We probably saved $2 million more in our first year than we otherwise would have."
Strangely, many AI companies shy away from the term.
"We don't refer to the category anymore as artificial intelligence," says John Mutch, president of HNC Insurance Solutions, which made the software the Utah insurance company uses. That's partly a reaction to all the hype some AI companies stirred up in the 1980s, promises that the technology couldn't fulfill. The reluctance to use the AI tag also stems from business's different mission.
"The [scientific] community at large is still focusing on trying to understand what the human brain does," he says. "The business community ... has largely gone to what I call 'intelligence amplification.' We're trying to make humans more effective at what they do."
HNC's competitor Infoglide Corp. also targets the insurance industry, but geneticists are looking to adapt its software to aid their research, and the US Air Force uses the technology to track international terrorists. The firm's software can search thousands of variables in databases to figure out who someone really is, even if they use assumed names and false addresses.
"We mimic one of the highest forms of human intelligence," says John Valentine, president of the Austin, Texas, company. "From the context of the query, we're able to judge the answer." For example, people can easily judge that a car and bus are similar as forms of transportation, but they also know that they shouldn't park the bus in their garage. Infoglide's software can use similar judgments of context when it tries to figure out whether John Doe of Newark is the same person as J.P. Smith in Manhattan. Thus both men couldn't be the same if their heights differ by six inches. But the computer won't be using inches to determine whether Newark is "close" to Manhattan.
While Infoglide searches for similarities to find people, a Pasadena, Calif., firm uses another pattern-searching technique to smell. Its electronic nose has far fewer receptors than the 1 million in the human nose. Those receptors are plastic "sponges" infused with carbon particles, and each swells by a different amount when exposed to an odor. By running electricity through the carbon particles, the computer can measure which sponges have swelled and by how much, then match the pattern against its database of known smells. The technique should help chemical and food companies control quality.
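Matching a pattern of sponge swellings against a database of known smells is essentially nearest-neighbor search. A minimal sketch with made-up sensor counts and odor signatures, not Cyrano's data:

```python
import math

# Invented signatures: one swelling value per "sponge" in the array.
SMELL_DB = {
    "chlorine": [0.9, 0.1, 0.4, 0.0],
    "vanilla":  [0.2, 0.8, 0.1, 0.5],
    "gasoline": [0.7, 0.0, 0.9, 0.3],
}

def identify(swell_pattern):
    """Return the known odor whose signature is closest (Euclidean
    distance) to the measured swelling pattern."""
    return min(SMELL_DB,
               key=lambda odor: math.dist(swell_pattern, SMELL_DB[odor]))

print(identify([0.8, 0.05, 0.5, 0.1]))  # closest to the chlorine signature
```

A production nose would have many more sensors and a much larger database, but identification still reduces to finding the stored pattern nearest the measured one.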
Machines can do the routine sniffing: determining that a railroad tank car is carrying the correct chemical or that a printed candy wrapper won't make the chocolate bar smell bad. That will leave a company's human sniffers free for more complex tasks. The device is still a prototype; Cyrano Sciences hopes to deliver production models by year's end.
In many cases, companies are using decades-old artificial-intelligence techniques. (Streamline's smart grocery computer works on the principles of an 18th-century mathematician.) But until now, computers weren't fast and cheap enough to take advantage of them. "Users want responsiveness; they want to feel that a computer is having a dialogue with them," says Brian Eberman, project manager and speech scientist with SpeechWorks International. So while speed doesn't make a computer any more intelligent, it certainly makes it much more practical.
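The 18th-century mathematician is presumably Thomas Bayes, whose rule shows how to update a probability when new evidence arrives - say, a vegetarian suddenly ordering pork. A toy calculation with invented numbers:

```python
# Bayes' rule: P(party | pork) = P(pork | party) * P(party) / P(pork)
# All probabilities below are invented for illustration.

p_party = 0.05                 # prior: 5% of orders precede a party
p_pork_given_party = 0.60      # a vegetarian hosting a party may buy meat
p_pork_given_no_party = 0.01   # otherwise a vegetarian almost never does

# Total probability that a vegetarian's order includes pork.
p_pork = (p_pork_given_party * p_party
          + p_pork_given_no_party * (1 - p_party))

# Posterior: the pork order makes a party the likeliest explanation.
p_party_given_pork = p_pork_given_party * p_party / p_pork

print(f"P(party | pork order) = {p_party_given_pork:.2f}")
```

A rare event (a party) becomes the best explanation once evidence arrives that is far likelier under it than under the alternative - which is how a rule system can justify switching from milk coupons to chips and dip.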
Dialogue has long been considered a litmus test of AI. In 1950, English mathematician Alan Turing suggested that a computer was intelligent if it could answer a person's questions and the person couldn't tell if it was a computer or a human typing on the screen.
Today's AI machines are too narrowly focused to pass any such test by a keen observer. Even the researchers who built the Deep Blue chess-playing computer don't consider their creation intelligent because it can only play chess. It will be some time, they say, before computers become powerful enough to exhibit general intelligence.
But with commerce now latching onto the technology, AI research will likely mushroom and accelerate. And the first really intelligent machine may achieve "consciousness" in someone's garage rather than a university lab.
So there is reason to watch this technology. But the biggest immediate danger is not that someone will invent a Frankenstein machine with Star Wars power, but that today's much paler imitations of machine intelligence will delve into the secret places of our lives.
If sophisticated software can track terrorists who try to stay hidden, marketers inevitably will use the technology to track consumers. Already, says John Valentine, president of Infoglide, his software can look at people's credit-card balances (not their actual purchases, just how much they spend), combine them with other publicly available data, and predict how much money they earn, the size of their house, the kind of car they drive, and whether someone in the family is expecting a baby.
Small wonder, then, that some technology executives say that privacy no longer exists.
But it's not likely computers will take over the world anytime soon, AI experts say. When the world's top players showed up for the international bridge championships in France last year, some were spooked by GIB, the only bridge-playing computer invited to the tournament. "Its tempo is bizarre," concedes Matt Ginsberg, the University of Oregon professor who created it. It reaches solutions in ways completely alien to human thinking.
"It's so easy to take this alienness as a threat," he adds. But "my feeling is that this alienness is unbelievably good news because we aren't going to compete with these things.... They're going to make us bigger and better."