The next 'expert' you consult could be electronic

August 19, 1982

These are humble experts: When they're wrong they stand corrected. When they beat you at chess, they don't gloat. But if chess is their game, they can beat you at chess.

Red Agent, for example, is an electronic replica of the Soviet military mind. It is a computer-program-in-the-making at the Rand Corporation, the southern California think tank. When presented with a military situation, it will respond the way the Soviet leadership is likely to respond.

How does it know?

How can it guess?

Ask the machine, and it will explain - as simply as it can.

Red Agent is an electronic consultant. A product of artificial intelligence research, it is what computer scientists call an ''expert system.'' It is a computer program loaded with the facts, rules, and assumptions of top human experts so it can mimic and sometimes outperform them.

It's a friendly system. When a high-ranking American officer reviewing Red Agent challenged one of its responses, the program produced the rules of thumb it had used to reach that response. Like any helpful consultant, it traced its reasoning in language simple enough for a nonprogrammer to understand. In this case, one of those rules was then corrected according to what the officer, an expert on the subject, knew of the Soviets and how they wage war.

It was one expert learning from another, a sort of electronic apprenticeship.
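The exchange described above depends on the system being able to cite the rule behind each conclusion. A minimal sketch of such an explanation facility, with rules and facts invented purely for illustration (the real Red Agent is far richer), might look like this:

```python
# Invented rules of thumb: each conclusion maps to the conditions
# that must hold for it to be drawn. Purely illustrative.
RULES = {
    "reinforce the flank": ["the front is static", "reserves are available"],
}

def conclude_with_trace(facts):
    """Return each conclusion whose conditions are met, paired with
    the plain-language justification a reviewer could challenge."""
    trace = {}
    for conclusion, conditions in RULES.items():
        if all(c in facts for c in conditions):
            trace[conclusion] = "because " + " and ".join(conditions)
    return trace

print(conclude_with_trace({"the front is static", "reserves are available"}))
```

Because each conclusion carries its justification, an expert who disagrees can point at the offending rule and revise it, rather than rummaging through the program's internals.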

But Red Agent is just one of the more exotic uses of expert systems now emerging. Hundreds of expert systems have been built in the past couple of years. They promise to edge into every field that requires specialized expertise - from analyzing well-drilling logs to scheduling production on a factory floor to setting damages in some kinds of civil suits.

They are leading something of a boom in artificial intelligence research. The promise is enormous. And these systems - which can use off-the-shelf computer hardware - have begun to look commercially practical.

Now once-skeptical IBM, General Motors, Digital Equipment, Texas Instruments, Westinghouse, and 10 or 15 others have begun developing electronic experts. The number of companies and researchers involved in developing expert systems has grown tenfold in the past five years, estimates James Baker, director of systems science for Schlumberger, a multinational manufacturer now field-testing its first expert systems.

''Expert systems'' are a key element of an ambitious, Japanese-government-sponsored plan for a fifth generation of computers, a plan that would develop them as innovators. These computers would speak and understand large chunks of human language. They would hold and efficiently apply vast stores of human knowledge. But so far, the Japanese are still catching up on Western research in the area.

Unlike other kinds of artificial intelligence, expert systems don't generate complex relationships of ideas to mimic human perception and understanding. Rather, they capture the knowledge of an expert. Knowledge engineering is another name for it.

The gist of it is to break down everything an expert knows about his subject - all his insights and intuitions - into networks of ''if this, then that'' type rules. The smallest, simplest expert systems contain about 300 such rules. One being developed now at Carnegie-Mellon University would contain 10,000. The network is then coded into a computer program.

A rule goes something like this: ''If the sky is red at night, then the weather is likely to be calm.'' Fit into a network of others, it becomes the basis for an expert weather-forecasting system.
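A network of such rules can be sketched in a few lines of code. In this toy forward-chaining sketch, the weather rules and facts are invented to match the example above; a real system would hold hundreds of them:

```python
# Each rule: a set of conditions that must all be known true,
# and the conclusion to assert when they are. Invented examples.
RULES = [
    ({"sky is red", "it is night"}, "weather is likely calm"),
    ({"weather is likely calm"}, "small-craft warning unlikely"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"sky is red", "it is night"})))
```

Note how the second rule fires off the first rule's conclusion: chains of ''if this, then that'' steps are what turn a bag of rules into a network.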

''Experts have compiled knowledge,'' explains Dr. Mark Fox, director of the Intelligent Systems Laboratory at Carnegie-Mellon's Robotics Institute. ''They see something and they know the answer.

''We have to decompile this knowledge.''

An expert system can replace an expert in any field that has clear boundaries to it. ''One has to look for a closed system,'' notes Dr. Fox. A top executive, for example, usually draws such different information from such a wide variety of sources that a machine couldn't make his or her decisions. A specialist's job is less safe.

Generally, explains Rand's Ross Quinlan, computer scientists actually interview an expert, or several, over many hours. As they interpret his knowledge into rules fit for a computer to digest, the expert constantly checks and revises them.

When completed, the expert's knowledge is in a form that a nonexpert can tap, just like talking to the fellow himself.

But in the process of taking apart the expert's know-how, the programmers and the expert himself always seem to learn much more about what he knows. This is the hidden agenda in all artificial intelligence work: that the effort to simulate intelligence will teach us better just what intelligence is.

In the case of expert systems, the upshot is that the electronic expert can often become more expert than its human tutor.

An expert system called Isis will be installed by October in a Westinghouse factory to schedule production on the shop floor. It is one of many experimental prototypes developed at Carnegie-Mellon. It is just running through its tests now, but human experts in production scheduling say it performs very well, reports Carnegie-Mellon's Dr. Fox.

This is especially happy news, he points out, since human experts have never been very good at it.

An interesting feature of Isis is that it can not only do what it is told, but also figure out when to ignore what it is told. This is an important human capacity.

Its reasoning is based on all the constraints of production - the preference of workers for certain machines, a need for stability in shop routines, available supplies, the order backlog, and all the usual time and money factors. But it relaxes some of these constraints while it tries out alternative schedules, then chooses the most efficient.
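The idea of relaxing a soft constraint while honoring the hard ones can be shown in miniature. The jobs, durations, due dates, and worker preference below are all invented for illustration; the real Isis juggles far more constraints:

```python
import itertools

# Hypothetical job durations (hours) and due dates, back-to-back schedule.
DURATION = {"A": 3, "B": 2, "C": 4}
DUE = {"A": 4, "B": 9, "C": 9}

def tardiness(order):
    """Total lateness if the jobs run back to back in this order."""
    clock, late = 0, 0
    for job in order:
        clock += DURATION[job]
        late += max(0, clock - DUE[job])
    return late

def best_schedule(relax_preference):
    """Pick the least-tardy order. A soft constraint says the workers
    prefer job "B" to run first; relaxing it drops that penalty."""
    best = None
    for order in itertools.permutations(DURATION):
        penalty = 0 if relax_preference or order[0] == "B" else 1
        score = tardiness(order) + penalty
        if best is None or score < best[0]:
            best = (score, order)
    return best
```

In this toy case, honoring the workers' preference forces a schedule with one late job, while relaxing it finds an order in which nothing runs late: exactly the kind of trade-off the article describes Isis weighing.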

Prospector, an expert system at SRI International, assesses drilling prospects for their oil or mineral potential. This robotic wildcatter faces a different kind of problem: uncertainty.

It uses a semantic net, also called an inference net, in which the rules are sewn together with probabilities. If certain facts are known, then there is a probability that another set of data is true. The degree of certainty is carried down the appropriate branches of the net to arrive at a conclusion that carries a certainty factor, weighting its likelihood.
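Carrying certainty down the branches can be sketched with a simple multiplicative scheme. Prospector's actual probabilistic updating is more elaborate, and the geological rules below are invented for illustration:

```python
# Each rule: (evidence, hypothesis, strength of the implication, 0..1).
# Invented geology, loosely in the spirit of the Prospector description.
RULES = [
    ("altered rocks nearby", "favorable intrusive system", 0.8),
    ("favorable intrusive system", "copper deposit likely", 0.6),
]

def certainty(goal, evidence):
    """Certainty in `goal`: chain rule strengths back to known evidence,
    taking the strongest supporting branch at each step."""
    if goal in evidence:
        return evidence[goal]
    best = 0.0
    for premise, conclusion, strength in RULES:
        if conclusion == goal:
            best = max(best, strength * certainty(premise, evidence))
    return best

cf = certainty("copper deposit likely", {"altered rocks nearby": 0.9})
# 0.9 * 0.8 * 0.6 = 0.432
```

The conclusion arrives not as a yes-or-no answer but with a certainty factor attached, which is what lets such a system reason usefully about an uncertain subject like what lies underground.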

In tests so far, Prospector has arrived at very nearly the same conclusions and followed the same reasoning as the geology experts it learned from. One of its prospects is being drilled now.

IBM is just field-testing a system that diagnoses problems in disk-drive equipment (part of many computers' memories) for its repairmen. ''We find it's very good at the more difficult problems,'' notes Peter Hirsch, director of expert systems at IBM's Scientific Center in Palo Alto, Calif. The simple ones are easily handled by people.

Schlumberger is seeking systems that can interpret drilling logs, which record data from down in the well. From this data geologists divine the underground structure. The company has a good start with Dipmeter Adviser, a program that infers the tilt of underground rock layers from dip-meter readings.

As IBM's Dr. Hirsch puts it, ''There are a lot of oil wells and just a few experts.'' IBM has also built a well-analyzing system.

Dr. Baker at Schlumberger wants to build expert systems that use still deeper geological reasoning to take them beyond what the foremost human geologists know - systems that can take new geological data and gain added leverage from it.

He also sees applying expert systems to engineering problems - ''anything that requires a high level of expertise.'' And ultimately he would like to build an expert system that writes computer software.

Dr. Hirsch speculates that consumers could be using expert systems on their home computers someday for doing their taxes, getting financial advice, and repairing home appliances and cars.

''The basic idea is to make computers much easier to use for the general public,'' Dr. Hirsch says. ''Programming is still very difficult for the non-programmer.''

Indeed, the difficulty of programming computers is one of the chief bottlenecks in an otherwise burgeoning industry. Expert systems, which use languages as close to plain English as possible, are both easy to communicate with and easy to change.

The reasoning of the standard computer is nearly impenetrable to the non-programmer, and changing a program requires reworking of commands buried deep in the minutiae of the program. Expert systems are flexible. The rules can be revised easily.

Where expert systems will lead us no one yet knows, insists Ross Quinlan, a prominent researcher at the more abstract leading edge of this research. ''(The amount of research) is exploding. . . . We can't tell what it's going to be like in five years since the whole thing has changed so much in the last five years. We're still experimenting with different architectures of putting these things together.

''The task is to structure these rules. You can't just throw them in the bag.''

Red Agent remains one of the most provocative applications. It is one facet of Rand's Strategic Assessment Center, a war-game experiment funded by the Defense Nuclear Agency of the US Department of Defense. It includes another program, Scenario Agent, that simulates the political behavior of non-superpower nations in a superpower confrontation.

These mechanical players make better war games for military strategists for two reasons: First, they are more flexible and lifelike than standard computer programs. Second, they are easier to control and monitor and are more consistent than human war-game players.

So human teams can play as Blue Agent - the US forces - and try out different strategies on a consistent and understandable, yet reasonably lifelike opponent.

An unrelated program Rand is developing is Swirl, an expert system that simulates all-out strategic war between an offensive and a defensive power. Phil Klahr, a computer scientist working on the project, describes it as a model that makes it easier to ask, ''What if they did this? How would that affect our plans?''

Dr. Klahr is developing Swirl to show the Air Force the potential of this kind of ''expert'' model. Swirl has a screen that shows materiel moving and bombs exploding - a little reminiscent of an arcade video game.

''There's a lot to be learned from video games,'' Dr. Klahr concurs.