Getting computers to think: simulating a Chinese chef, Cyrus Vance

This ''expert'' in Oriental cuisine can concoct a new pork recipe in seconds, but it's never steamed a water chestnut or even entered a kitchen. That's because the expert, dubbed ''WOK'' by researchers at Yale University, is an elaborate new computer program that invents recipes. The program illustrates the current push in computer technology toward giving machines the ability to mimic human thought processes.

If told there's no chicken for a particular dish, WOK could come up with an alternative using pork, juggling ingredients and spices accordingly. A researcher who sampled some of WOK's first concoctions says that ''they don't taste too bad, either.''

But the notion of what constitutes a ''thinking'' computer - and the debate over whether such a device is possible - is creating deep divisions among computer scientists doing the research.

The main objective for some researchers is to use computers to simulate human thought patterns. For this group, the important questions have more to do with how humans retain knowledge and solve problems than with the development of marketable computer technology.

At the same time, at least one expert contends certain areas of human thought will always defy translation into the tight boundaries of computer memory, no matter how powerful the machines become.

Computers will never match human abilities ''when it comes to areas in which real human problems are involved - where a very crucial aspect of what is being talked about is something connected to the humanity of one of the characters,'' says Joseph Weizenbaum, a Massachusetts Institute of Technology (MIT) computer scientist.

Dr. Weizenbaum contends that human concepts are learned on different levels. For instance, the word ''trust'' can be defined in a computer memory in the same way a person might define it. But a genuine understanding of that concept, Weizenbaum says, can't fully be expressed in words.

The researchers with the highest profile are those who insist that most human thought processes will someday be boiled down to a form a computer can digest. These researchers cautiously emphasize the limited capabilities of current systems. But they also say a new era in computer technology is emerging in which machines will be programmed to reason, make decisions, and even learn.

''We're going to understand the human reasoning process after we build intelligent machines,'' says Nils Nilsson, director of the Artificial Intelligence Center at SRI International in California. ''Those who say we have to first understand the human (thought process) have it backwards.''

There's one point on which experts agree: To teach a computer about the real world, it must understand human language. That's a major challenge for computers, because a large part of human communication depends not just on the words themselves but on a broad background of general knowledge about the physical world.

Researchers have been successful in creating so-called natural language programs that allow people to ask computers questions in unadorned English. Several of these programs are now on the market.

At the same time, some forms of artificial intelligence are already being used for tasks once considered the privileged domain of human intellect, such as finding underground mineral deposits and making medical diagnoses.

These so-called expert systems mirror the specialized knowledge and reasoning patterns of human experts. Extracting this information from people and structuring it into computer programs is called ''knowledge engineering.''

Mechanized consultants are now being developed to handle such things as accounting, insurance, and even computer maintenance. Some enthusiasts point to this expanding array of expert programs as solid proof that the long-talked-about thinking machines are in the offing.

''We're still very far from having any single program that has the range of knowledge that a normal human being would carry around,'' cautions Herbert Simon, the Nobel Prize-winning pioneer in artificial intelligence at Carnegie-Mellon University.

The trick, of course, is staking out only a narrow slice of human knowledge to copy into the computer. But MIT's Dr. Weizenbaum says the razzle-dazzle of emerging expert systems may be somewhat deceptive.

''It [an expert system] shows that a machine can do extremely tricky, intricate, clever things if you pay enough attention to context,'' says Weizenbaum. ''It has nothing to do with intelligence, except that the programmer that did it has to be intelligent.''

Dr. Simon says there are already programs that demonstrate human-style creativity and problem-solving abilities. An example, he says, is BACON, developed at Carnegie-Mellon. If given the basic information available to the astronomer Johannes Kepler, BACON can come up with Kepler's Third Law, which relates the distance between a planet and the sun to the time it takes the planet to orbit the sun. It can also sift through data and ''discover'' Ohm's law, which defines the relationship between resistance, voltage, and current in a circuit.
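
To give a feel for the idea (this is an invented toy sketch in Python, not the actual BACON program, which is far more general), a computer can search simple power-law combinations of orbital period T and distance D from standard planetary data until it finds a combination that stays constant:

```python
# Toy sketch of BACON-style discovery: search simple power-law combinations
# of two measured quantities for one that stays (nearly) constant across
# the data. Periods are in years, mean distances from the sun in
# astronomical units (rough textbook values).

planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.862, 5.203),
}

def is_constant(values, tolerance=0.02):
    """True if all values agree to within a small relative tolerance."""
    lo, hi = min(values), max(values)
    return (hi - lo) / hi < tolerance

# Try combinations T^a / D^b and report any invariant found.
for a in range(1, 4):
    for b in range(1, 4):
        ratios = [T**a / D**b for T, D in planets.values()]
        if is_constant(ratios):
            print(f"T^{a} / D^{b} is constant across the planets")
            # With this data, only T^2 / D^3 passes: Kepler's Third Law.
```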

The key to any type of artificial intelligence is the computer software - sets of detailed instructions that tell the computer how to go about solving problems. In the past, these orders have generally been relatively inflexible.

But it takes flexibility and a measure of what can only be called common sense to deal with the real world. Researchers are trying several approaches to giving machines this wider capability. In one, knowledge is reduced to sets of ''if, then'' rules. For instance, the computer can be told that if a vehicle has four wheels, then it's not a motorcycle.
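
A minimal sketch of what such rules might look like in code (the four-wheel example is the article's own; the rule format and the extra rules are invented for illustration):

```python
# Minimal sketch of an ''if, then'' rule system: each rule pairs a test on
# the known facts with a conclusion to add. The engine keeps applying rules
# until no new facts emerge.

rules = [
    # The article's example: if a vehicle has four wheels, it's not a motorcycle.
    (lambda f: f.get("wheels") == 4, ("is_motorcycle", False)),
    (lambda f: f.get("wheels") == 2 and f.get("has_engine"), ("is_motorcycle", True)),
    (lambda f: f.get("wheels") == 4 and f.get("carries_passengers"), ("is_car", True)),
]

def infer(facts):
    """Apply every rule whose condition holds until nothing new is learned."""
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                changed = True
    return facts

print(infer({"wheels": 4, "carries_passengers": True}))
# -> {'wheels': 4, 'carries_passengers': True, 'is_motorcycle': False, 'is_car': True}
```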

Another approach plugs knowledge into a matrix, showing the computer how different snatches of information are interrelated.

An example of this sort of framework is a Yale program called CYRUS, designed to mimic the knowledge of former Secretary of State Cyrus Vance. When CYRUS was asked, ''When was the last time your wife met Israeli Prime Minister Menachem Begin's wife?'' it promptly replied: ''At a state dinner in Israel in January 1980.''

CYRUS had been programmed with details about state dinners, but never with any information about the wives. According to Dr. Janet Kolodner, who developed CYRUS, the program had to go through a multiple-step reasoning process to come up with an answer: First, it determined that the women would have to meet during a social event. Then, it decided the event would have to be of a political nature.

''Since it knows about state dinners, it narrowed in on that,'' says Dr. Kolodner, who now hopes to develop a world affairs expert system capable of offering political advice.
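
A drastically simplified sketch of that chain of inference (the event records and names below are invented placeholders, not CYRUS's actual data or internal structure): the program is never told the wives attended anything, but it knows which events the two officials shared and has the general fact that state dinners are social occasions spouses attend.

```python
# Simplified sketch of interlinked event knowledge. Each record links an
# event to its type, place, date, and official attendees. The wives are
# never listed; a general fact about state dinners lets the program infer
# that the spouses would have met there.

events = [
    {"type": "state dinner", "place": "Israel", "date": "January 1980",
     "attendees": {"Vance", "Begin"}},
    {"type": "treaty negotiation", "place": "Camp David", "date": "September 1978",
     "attendees": {"Vance", "Begin", "Sadat"}},
]

# General knowledge: social events that spouses normally attend.
SOCIAL_EVENTS = {"state dinner"}

def wives_last_meeting(official_a, official_b):
    """Find a social, political event both officials attended and assume
    their wives accompanied them."""
    for event in events:
        if event["type"] in SOCIAL_EVENTS and {official_a, official_b} <= event["attendees"]:
            return f"At a {event['type']} in {event['place']} in {event['date']}."
    return "No meeting could be inferred."

print(wives_last_meeting("Vance", "Begin"))
# -> At a state dinner in Israel in January 1980.
```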

There are, however, glitches in knowledge systems. A Yale program that reads and summarizes news reports one day proclaimed there had been an earthquake. Since no researchers had heard about it on the news, they investigated. It turned out the computer had misunderstood a story headlined: ''Death of Pope shakes United States.''

''Figurative language is still an unsolved problem,'' says Stephen Slade, assistant director of Yale's Artificial Intelligence Project.

But the biggest problem is that even the most cleverly programmed computer usually doesn't retain what it learns. One major focus now is to create computer programs that can learn from experience.
