Cleveland — "The computer . . . is less like a man than is an amoeba; nevertheless it is more like a brain than any other machine has been before. It is close enough to make men shiver."
When James R. Newman wrote that in his "World of Mathematics" 25 years ago, the modern digital computer was in its swaddling clothes. Since then it has come a long way, from relatively simple and unambitious arithmetical uses to great sophistication, enjoying a worldwide dependence that brooks no retreat.
How much closer has the computer come to making us shiver, to making us wonder if we have cloned our minds?
The "ordinary" uses of computers, those we take for granted, are themselves vastly impressive. This information-processing colossus is now an intimate part of science, law, banking, space flight, medicine, education, manufacturing, and a multitude of other domains. It distributes power, payrolls, and phone calls. It reports, manages, decides, controls, and predicts.
However, it is the "extra-ordinary" capabilities of computing machinery that I wish to examine here. This is the so-called field of "artificial intelligence." Here we must consider automata which write music and poetry, play chess, prove mathematical theorems, converse with humans, recognize voices, identify faces, translate languages, and do a host of other marvels which were once the sole province of man.
A perennial question we all hear is, "Do computers really think?" If not now, is it likely -- or possible -- that we can in the foreseeable future invest them with such human characteristics as thought, intelligence, consciousness, and emotion?
For each of these and similar questions, a pitched battle has been raging for two decades. Proponents of "artificial intelligence" are usually avid. Opponents are equally enthusiastic. Both tend to be zealots, and the cases often get overstated.
Humans have been interested in finding out how they themselves tick for millennia. Similarly, the desire to imitate living systems has ancient roots. In the first century AD, Hero of Alexandria designed an automaton to replace people for opening doors. During succeeding centuries a vast number of simulacra of various animals, including man, were confected. Until this century these devices were all mechanical -- elaborate clockwork contrivances which were made to resemble living creatures. The central idea was that by mimicking nature, we would thereby illuminate actual living processes. This quaint notion is far from dead, as we shall see.
Each generation is awed by the advanced technology of its day. Just as 19th-century mechanical talking dolls and handwriting automata were marveled at, so too were the walking, talking, electronic robots of the 1930s. But once one sees the (relatively simple) tricks of operation, then the awe, admiration, and possible relevance to nature's secrets evaporate.
It remained for the computer to be sufficiently powerful, complex, and inscrutable to capture and hold our present attention as a serious contender for consideration not only as a true "thinking" machine, but also as a vehicle by which the human capacity to know and understand may be elucidated.
This brings us to an important matter referred to as "black box" equivalence. Two systems are said to be equivalent at some level if at that level they behave alike; that is, a given stimulus produces a particular response. The two systems (say a human and a computer) are to be viewed as black boxes into which we cannot see. Thus, while some set of outside functions may be very much alike, their internal functions may, so far as we can determine, be quite dissimilar.
With this view alone, it is difficult to accept the old hope that by understanding how to construct a mechanical animal which acts in a lifelike manner we might better comprehend the processes of its living counterpart. Yet some of the present artificial-intelligence enthusiasts hold that by converging on an efficient control design for a robot's arm, for instance, or by evolving an effective pattern recognition system for an artificial eye, we will come closer to an understanding of how the human nervous system probably is organized.
A relevant example taken from chess playing illuminates the problem with this position. The best computer chess programs now perform at the master level. They beat most humans at what used to be considered a game requiring considerable intellectual capability -- or at least specialized intelligence. Now, at the black-box, behavioral level, human and machine appear to have considerable similarity. One is tempted to assign similar attributes to the two players and say that in winning, the computer "outthinks" the human and is (for this game, at least) more "intelligent."
However, the machine uses rote play. Despite years of diligent effort, chess program designers still have not been able to incorporate significant strategy or game-planning as humans claim to use. Instead, the backbone of automated chess play remains tactics. Each successive board position is viewed as a new game with that position as a starting point. The power of play depends on memory capacity and on procedures which evaluate board position; skill is determined by exhaustive depth of search through potential plays and replies. Thus the computer program is mainly a brute-force trial-and-error operation despite its impressive performance.
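The brute-force procedure just described -- a static evaluation of each board position plus an exhaustive search through potential plays and replies -- is essentially the minimax algorithm. A toy Python sketch may make the idea concrete; the little adding game and its scoring below are invented stand-ins, not a real chess engine's move generator or evaluator:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Exhaustive depth-limited search: the brute-force trial and
    error described above.  Each position is scored on its own merits,
    with no long-range plan or strategy."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)          # static "board" evaluation
    scores = (minimax(m, depth - 1, not maximizing, moves, evaluate)
              for m in options)
    return max(scores) if maximizing else min(scores)

# Hypothetical stand-in for chess: players alternately add 1 or 2 to a
# running total, and the evaluator simply prefers high totals.
toy_moves = lambda n: [n + 1, n + 2] if n < 10 else []
toy_eval = lambda n: n

# Pick the maximizer's best immediate move under a 4-ply search.
best = max(toy_moves(0),
           key=lambda m: minimax(m, 3, False, toy_moves, toy_eval))
print(best)
```

The power of such play, as the text notes, rests entirely on how deep the search goes and how well the evaluation function scores positions.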
Can one tell from an account of a game which player is a human? In a recent carefully controlled test, a chess grandmaster could not distinguish human from machine by examining game transcripts. However, the difference was evident to programming specialists practiced in minute analysis of chess play.
Suppose we imagine a supercapacious, superfast computer of the future, one which may outplay most, if not all, humans simply because it can search alternatives more completely and more accurately, yet still using no humanlike planning, strategy, or global outlook. Should we say the machine is "smarter," is "thinking"?
Part of the problem is semantic. We have no adequate definitions of human terms such as "intelligence." Even experts disagree on both meaning and measure. What then are we to say about artificial intelligence?
Intelligence, consciousness, thinking, free will, and the similar qualities we have attributed to the human mind are sticky concepts. Operational definitions or tests that are unambiguous and completely defined do not seem to exist, even though everyone "knows" what they mean. As a result, human discussions as to whether machines think are as pointless and inconclusive as those pertaining to most other metaphysical matters.
Most artificial-intelligence (AI) systems do trivial things impressively. To see a machine recognize images of letters and numbers, identify faces and clouds and machine parts is impressive. However, these acts occur in limited, sharply restricted, and highly artificial arrangements and contexts. The triviality resides in the constrained performance and in the relatively simple and transparent rules of operation. The nontriviality yet to be achieved resides in finding the operational procedures (programs) which will permit much more general and far less restrictive performance. But that is so big a jump that the difference is a qualitative one. For example, automated recognition of human faces from simple line images of exactly positioned profiles is easy; but automated recognition of complete faces in arbitrary lighting and orientation is (currently) far beyond our wildest hopes, although humans do it quite well.
Natural-language dialogue provides another compelling example. It is most impressive to eavesdrop on a human and a computer engaged in a teletypewriter-mediated conversation about how the human feels about members of the family. One such conversation went like this:
Human: I hate my dog . . . he loves somebody else.
Computer: Your dog?
Human: He loves my cat.
Computer: Why do you say your cat?
Human: My mother loves my cat.
Computer: Tell me more about your family. Why haven't you spoken of your father, for example?
Human: He's dead.
Computer: Does any part of your body hurt you?
Human: Yes, my nose.
Computer: Does that suggest anything else that belongs to you?
Human: My ears.
Computer: It is important to you that your ears . . .
Human: You are a stupid doctor.
Computer: What makes you think I am a stupid doctor?
Human: You don't know anything.
Computer: We were discussing you, not me . . .
The machine's handling of word meaning and sentence structure is striking. But, once again, the triviality resides in the immensely restricted domain of operation, where carefully tailored special-purpose "rules of the road" have been programmed by the human. And once again the nontriviality inheres in extending such performance to more flexible discourse, with fewer operations that are precisely cut and dried and specified a priori and with significantly expanded spheres of action.
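Conversations like the one above can be driven by remarkably little machinery: keyword-triggered pattern-and-template rules applied to whatever the human last typed. Here is a minimal Python sketch in the spirit of such programs; the rules are invented for illustration and are not the actual program's script:

```python
import re

# Keyword-triggered transformation rules, tried in order.  Each maps a
# pattern in the user's sentence to a canned reply template -- the
# carefully tailored "rules of the road" described in the text.
RULES = [
    (re.compile(r"\bi hate my (\w+)", re.IGNORECASE), "Your {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.IGNORECASE), "Why do you say your {0}?"),
    (re.compile(r"\byou are (.+)", re.IGNORECASE),
     "What makes you think I am {0}?"),
]
DEFAULT = "We were discussing you, not me."

def reply(sentence):
    """Return the first matching rule's reply, or a stock deflection."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*m.groups())
    return DEFAULT  # no keyword matched; deflect back to the user

print(reply("He loves my cat"))          # Why do you say your cat?
print(reply("You are a stupid doctor"))  # What makes you think I am a stupid doctor?
```

Nothing in these rules involves understanding; the apparent insight comes entirely from how aptly the templates echo the user's own words.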
For a final example, consider two areas of computer and human problem solving. Medical diagnosis and mathematical theorem-proving provide seemingly different approaches. Both have had noteworthy successes via automation. Typical computerized diagnostic systems receive symptoms, request additional reports, and then apply stored-table lookup and probabilistic rules based on physicians' introspection and on medical records. Diagnostic accuracy frequently beats (some) humans. While the results are often impressive, the process is transparent in its simplicity.
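The lookup-plus-probabilities procedure can be sketched in a few lines of Python. The diseases, priors, and symptom likelihoods below are invented illustrative numbers, not medical data; the scoring is a simple naive-Bayes-style product of stored table entries:

```python
# Stored tables: disease priors and P(symptom | disease).
# All numbers are hypothetical, for illustration only.
PRIOR = {"cold": 0.6, "flu": 0.4}
LIKELIHOOD = {
    "cold": {"cough": 0.7, "fever": 0.2},
    "flu":  {"cough": 0.6, "fever": 0.9},
}

def diagnose(symptoms):
    """Score each disease as prior times the looked-up symptom
    likelihoods, then pick the highest scorer.  Pure table lookup
    plus probabilistic rules -- transparent in its simplicity."""
    scores = {}
    for disease, prior in PRIOR.items():
        p = prior
        for s in symptoms:
            p *= LIKELIHOOD[disease].get(s, 0.05)  # small default for unknowns
        scores[disease] = p
    return max(scores, key=scores.get)

print(diagnose(["cough"]))           # cold
print(diagnose(["cough", "fever"]))  # flu
```

The program never reasons about physiology; it only multiplies numbers a physician supplied in advance.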
The procedural steps in establishing proofs of mathematical theorems are less obvious and often more compelling. Yet despite the rather disparate domains, the two areas relate closely. In arriving at a proof of a theorem, a computer program is handed specific starting conditions, invested with allowable rules of combination and implication, and primed with a set of instructions to search through possibilities for permissible solutions. Some of the machine proofs are judged to be elegant. Still, the game is completely specified beforehand.
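A stripped-down illustration of such a prover: start from the given axioms, exhaustively apply a fixed rule of implication (modus ponens), and search until the goal appears or nothing new can be derived. The mini-theory here is hypothetical, and real provers are far more elaborate, but the completely-specified-beforehand character is the same:

```python
def prove(axioms, rules, goal):
    """Forward-chaining search: repeatedly fire any rule whose
    premises are all known, until the goal is derived or the set of
    known facts stops growing.  The 'game' is fully specified by the
    starting conditions and the allowable rules."""
    known = set(axioms)
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)   # modus ponens
                changed = True
    return goal in known

# Hypothetical mini-theory: each rule is (premises, conclusion).
rules = [(("p",), "q"), (("q", "r"), "s")]
print(prove({"p", "r"}, rules, "s"))  # True
print(prove({"p"}, rules, "s"))       # False
```

As with the chess program, what impresses is the depth of mechanical search, not any insight into why the theorem is true.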
Outcomes are impressive to the human observer in both cases only to the extent that the number and complexity of steps intervening between start and finish are initially obscure. Are you reminded of chess?
All this is not to say that many smart people have not worked hard and produced new and often extremely impressive results. Indeed they have. The point is that these results pertain to the easy-to-mimic and relatively trivial aspects of human behavior. Game-playing, simple-image recognition, simplistic Procrustean natural-language dialogue, logical table lookup, and trial-and-error assembly combined with artless inductive inference are the easy-to-pick-up nuggets on the ground of replicating brain function.
The really hard mining operations, though continually talked about, promised, and worked on, have not been touched. Such matters as nontrivial generalization and inductive inference remain elusive. While syntax appears to be reasonably well handled, semantic capability is nowhere in sight. A machine able to pose new problems or to initiate new topics for consideration is at present beyond our dreams. Employment of aesthetic or value judgment and incorporation of constructive curiosity or of emotion are far beyond definition, much less reach. Doing anything which has not been specifically instructed by a human, either implicitly or explicitly, is so remote from current capability that little serious effort has been spent on it.
I have accused AI of being simple, or at least shallow. It is not wrong, bad, or even unimpressive to do humble things well. Elegant, effective solutions even to trivial problems have merit. The point is that nontrivial problems lie waiting, and there is no readily perceived bridge from here to there.
However, it now seems to be time to get at some of the deeper questions if we can. Besides the obvious question, touched on above, of how to make computers do the more potent things -- autonomy, general problem solving, less restricted and less special settings -- there are important related questions pertaining to people. At the deeper levels of information processing, are we really more than table-lookup, trial-and-error processors? If so, what?
AI has shown us that some human brain functions may be quite simple, or at least simply mimicked. Until we learn more about all this, we may well assume that men and machines can in many respects be similar. Still, such matters as creative thought appear to be far different from the "simpler" properties. And one suspects that there is a highly nontrivial distinction between playing a game and perceiving a game to play in the first place.
Artificial-intelligence research can be valuable by helping us to pose and shape such questions. The quest for replicating human function may -- as the ancients believed -- assist in revealing nature, or at least help humans to hold clearer mirrors. We can be usefully moved from naive, simplistic positions, much as has happened in neurophysiological research over the past decade.
It will be important to make computers wonder as well as to make them respond to set puzzles. Difficult as that may be, there seemingly is no reason in principle why it cannot be done in time. Man may indeed spawn a new and even superior (in some sense) branch on the evolutionary tree. We have already done so with respect to speed and reliability.
Meanwhile, it is important to achieve and maintain perspective on the subject of artificial intelligence. Early promissory notes have been largely unpaid, staunch assertions to the contrary in some quarters notwithstanding. On the other hand, despite violent allegations by some that progress has been essentially nil, much has been learned, and many important questions have been phrased more precisely.
Consequently the old notion that by mimicking nature we will better understand life may have a validity of sorts; illumination may come from the process of trying to mimic. That alone can repay considerable research effort.
Today, 25 years or so after its serious initiation, artificial intelligence teaches us two homilies: (1) Undeliverable promises are best avoided, and (2) when in the dark we should not profess to understand. Given those preachments and a strong commitment to get on with it, perhaps AI can write some exciting new chapters in the coming decades.
What sorts of tests might then be made? Turing's famous test would still be good: How well can a computer convince a jury that it is human when the jury and (hidden) machine engage in unrestricted dialogue using a teletypewriter?
Then again, one might revert to tests using terms I initially discarded. In that case, let me observe that you and I think, are intelligent, and are conscious. To decide whether a computer shares such attributes, let's wait until a machine decides to write an essay like this and then tells us that it wishes to discuss the issues more fully with us.
Then we shall see.