Deep Blue's 'Thinking' Was Fast, but Not Deep
LESSONS FROM CHESS
PEOPLE on both sides of this year's man vs. computer chess match show every sign of drawing the wrong conclusions from Garry Kasparov's struggle with IBM's Deep Blue. The machine's startling, first-time-ever win over a world champion at the outset of the match signaled, it seems to all parties, the inevitability of eventual computer dominance of the game. Even Mr. Kasparov's most ardent supporters saw him emerge a victorious but gravely wounded defender of humanity, like Beowulf after the fight with Grendel.
Computer programmers witnessed, in Deep Blue, the vindication of their 50-year pursuit of high-performance computing through a strategy of improving the speed of calculation. That Kasparov ultimately won, though, with two victories in a row to close out the match, reaffirmed the magnificent adaptive qualities of the human mind. This should lead the skilled humans who design supercomputers, for whatever purposes, to rethink their slavish devotion to quantitative, fundamentally "brute force" measures of effectiveness.
The secret to Kasparov's success lay in his ability to put his sharp, attacking style on hold when it grew clear that Deep Blue could not be outpunched tactically. Instead, the human champion switched to a more prudent mode of play, one that deliberately sought ambiguous situations. This approach exposed the computer's weakness in strategic planning, making it vulnerable to the world champion's renewed, better-considered offensives.
Kasparov's successful adjustment mirrors those made by other chess masters who have competed against strong computers. Humans have won even under tight time constraints like the 25-minute-per-game limit in the Harvard Cup, where the human margin of victory has increased of late. Indeed, the emergence of computer chess has fostered something of a renewal of strategic thinking in a game that had grown, throughout the 20th century, in sharpness but not depth. Man-machine interaction has improved human play, and should do the same for computer performance.
Too much quantitative focus
Sadly, computer programmers prefer to see in Deep Blue's performance proof of the continuing, and very substantial, gains generated by increases in computing speed. This view encourages a de-emphasis on more qualitative aspects of high-performance computing, those that strive to model human thought rather than to obviate it through ever swifter calculations.
Claude Shannon, the founder of information theory, feared that this might happen. He wrote in 1950 that undue concern with the quantitative aspects of high performance would lead to the creation of computers that could "see far but notice little; remember everything but know nothing." Shannon saw chess, with its finite space and simple rules, as a perfect testing ground for assessing advances in computing that had potentially broad applications in commerce, education, and national security.
Supercomputers that simulate military conflict, widely used by the US defense establishment, provide a clear example of the sort of problem Shannon identified. These machines, all-seeing data powerhouses, have limited ability to maneuver strategically. Thus, when they were used, in 1990, to assess allied prospects in the looming war against Iraq's Saddam Hussein, nearly all of them mistakenly predicted a tough fight, one that would generate tens of thousands of US casualties.
Welcome computers at chess matches
Since then, defense specialists have encouraged the development of more qualitative judgments in these machines, an improvement largely driven by software rather than raw computing power. Indeed, the decision by the Clinton administration last year to withhold military-simulation software from China, even though the new export-control policy allowed Beijing to purchase supercomputers, implies an emerging awareness of the primacy of ideas over raw calculating power.
Ultimately, the combination of technical and conceptual computing power will produce machines that will revolutionize society and security. But this will require a realization that the fundamental relationship between man and machine is complementary, not conflictual.
The US Chess Federation, for example, should begin by refusing to sanction tournaments that exclude participation by computers, a prevalent practice faintly reminiscent of 17th-century samurai efforts to outlaw firearms in Japan. In the defense realm, the federal law precluding the placement of weapons systems on computerized robots should be repealed. Accidents may happen; but the future of warfare will doubtless include forces in which people and intelligent machines fight side by side.
For IBM and other companies that design high-performance computers, the implications are clear. They must fund more research aimed at improving the qualitative aspects of computing. A balance is needed between brute-force calculating power and algorithms that strive to replicate human thought processes.
In the words of John Rambo, that pulp cinematic icon so often confronted with seemingly hopeless odds, "the human mind is the greatest weapon." If Rambo is right, then there is no cause for the "deep blues." Instead, the computer's remarkable advances should be greeted joyfully, as they reflect the wonder and magnitude of human potential.