Can computers think? No, says Yale expert Roger Schank - at least not yet
If you haven't dashed off to buy a home computer yet, Roger Schank has some advice: Don't. Wait awhile. Ditto for all you school administrators worried about turning out a generation of students who will be all thumbs in the real world unless they have been taught on the latest IBM PC or Apple II. Relax. No need to run out and fill classrooms with blinking screens.
In fact, one of the things that irks this professor of computer science and psychology at Yale is the current television commercial about the computer-illiterate kid.
You know the one. Boy goes off to college and is sent home because he can't keep up with the rest of the silicon-smart set. You see him standing forlornly at the train station.
''I mean, c'mon,'' grouses Schank, who sports a salt-and-pepper beard, as he slumps into a chair in a Boston hotel room. ''We have this belief that you can't succeed unless you know all about computers. There is a paranoia in the country today.''
Schank's message: Except for certain functions and jobs, computers don't do enough yet to be useful for most consumers and schools. If you can't use today's machines without pain, wait. Computers will have to change, not people. That someone involved in the field of computing should be holding up a cautionary hand isn't complete heresy. Others, too, are sounding a warning about what they consider the overselling of computers.
But Schank also believes there is too much hype now about his own special area of computing: artificial intelligence (AI), the attempt to get machines to mimic aspects of human thought.
His point: Scientists are nowhere near creating machines as intelligent as any human being. They probably never will be.
''These all-knowing, omniscient machines don't exist yet,'' he says. ''It's hard for people to understand that there really is a difference between a system that can answer a 10-word question and one that can sense what your goals and aspirations are.''
These are among the themes he strikes in a new book, ''The Cognitive Computer,'' and in an interview during a stopover in Boston.
Schank is chairman of Yale's computer science department and director of the Yale Artificial Intelligence Project. He has also put some of his ideas to the test in the marketplace, starting his own AI company, Cognitive Systems Inc., a fact not lost on some critics who are amused at his diatribes against others for trying to cash in on the rapidly emerging field. Schank's special area of interest is natural language processing - trying to get machines to comprehend ordinary spoken English instead of computer languages.
Computers aren't even close to being intelligent, at least compared with humans, Schank says. To underscore the infancy of the field, he draws a parallel with the development of the automobile. ''The baseline vehicle has just been designed right now,'' he says. ''We don't even know where the end of the car is.''
Schank isn't alone in his concern about false expectations. There is now a lot of soul-searching in the American AI community about overselling its promise. The concern is that too much hype will cause a backlash among consumers, or, worse, among financiers.
One of the hottest commercial technologies right now is ''expert systems,'' machines designed to emulate the reasoning of human experts. Here, too, Schank argues that the systems beginning to appear on the market are not as smart as they sound. True, the machines have already been used with some success to help in prospecting for minerals, diagnosing diseases, and analyzing chemicals. And they show promise for other applications, such as evaluating insurance risks and making some financial decisions.
But, he cautions, the number of areas where machines can outperform human experts is more limited than copywriters suggest. Many of today's systems, too, are based on decade-old programming techniques that have become practical as computer power has become cheaper. ''Between the limited applications and the claims being made, there is a large gap,'' he says. ''No one has figured out a way that they would learn. Take an expert human and put him in a situation that is a little complicated, and he can plan his way out of it with something new. Computers can't do that today.''
Schank illustrates just how intelligent today's machines are (or aren't) with the tale of a financial advisory software system his company has developed. This silicon stockbroker can manage a portfolio. You can tell it how much money you have (by punching in information on a keyboard), and it will carry on a conversation with you. It might suggest investing in oil stocks. You say you're not interested in oil. It will rearrange the portfolio to accommodate your wishes, based on what it is learning from you.
What the machine won't do, unlike a human broker, is develop its own theory about the stock market. The machine will be quite ''dumb in the sense that it will answer only the questions you ask on the basis of how it was told to answer the questions you ask,'' says Schank. Nevertheless, he sees a use for such programs as efficient information carriers: ''These AI programs on the market won't draw judgments. They can deliver information in an effective way. They will make some inferences from what you said and draw some conclusions, but they will be simple.''
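The rule-bound behavior Schank describes can be sketched in a few lines of modern code. This is a hypothetical toy, not his company's actual system: a program that allocates money across preset sectors and simply drops whichever ones the user rejects, with no theory of the market behind it.

```python
# Toy sketch (assumed names, not Schank's system): a rule-based
# "advisor" that only redistributes cash over a fixed sector list.
def suggest_portfolio(cash, excluded_sectors=()):
    """Split cash evenly across default sectors, minus any exclusions."""
    default_sectors = ["oil", "technology", "utilities", "bonds"]
    sectors = [s for s in default_sectors if s not in excluded_sectors]
    share = cash / len(sectors)
    return {s: share for s in sectors}

# First pass: the program proposes oil among its default picks.
print(suggest_portfolio(10000))
# The user says "no oil"; the program accommodates by redistributing --
# but only because it was told to, not because it understands why.
print(suggest_portfolio(10000, excluded_sectors=("oil",)))
```

The point of the sketch is the limitation: the program "learns" your dislikes only in the sense of applying a filter it was given, which is exactly the gap Schank draws between answering questions and having a theory.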
The big question is how intelligent machines will ultimately get. Schank sees AI moving from the ability to ''make sense'' of certain information to eventually reaching the level of ''cognitive understanding'' (having limited reasoning capabilities). But he doesn't envision machines achieving human ''empathy.''
Consider a computer that analyzes plane crashes. Schank says it might be possible to design one which, brimming with aircraft design and other data, could draw some conclusions about why the plane went down and help mechanics pinpoint structural or other flaws. It would, in other words, do more than passively parrot back technical information. But it wouldn't go so far as to empathize - to feel any remorse because somebody was killed in the crash.
In other areas, Schank stresses these themes:
* Computer literacy. Don't worry about trying to understand computer programming. It's the machines that have to understand us, not we them. We didn't, he argues, have to become automobile-literate in order to drive.
* Education. Computers will become a beneficial educational tool only when they help in teaching reading, arithmetic, and reasoning skills - when they add new ways of teaching things that aren't being taught well now. Little software has been developed to do this. He suggests that schools postpone buying computers until more programs are available. He also advocates that districts pursue a pilot school approach: Take one school, stuff it with computers, and test out different software.
* Social impact of AI. Schank sees the ''big brother'' syndrome - more surveillance and the loss of personal privacy because of the growing use of computers - as a real threat.
He minimizes another fear about AI: job displacement. He subscribes to a somewhat rosy school of thought, prevalent among many in the AI community, that machines will mainly take the more mundane jobs people don't like to do or don't do well.
One example: a Soviet foreign policy expert. A machine might someday assimilate all the news reports coming out of the Soviet Union each day. No one person can do that now. In this sense, computers become the crowning act in the Industrial Revolution.
Ultimately, according to this theory, AI becomes a liberating force. ''I think it will change the nature of what it means to be human,'' Schank argues. ''We will come to think of ourselves as important because of the things we can do that computers can't.''