AT 2 a.m. last November, Clifford Stoll was awakened by a panicky telephone call from the National Aeronautics and Space Administration (NASA) Ames Research Center in California: Somebody was breaking into NASA's computers. Soon Dr. Stoll discovered that his own computers were under similar attack. Later that day, the United States news media were buzzing with reports of a computer `worm' that had taken over the Internet, a national network of 60,000 academic, commercial, and government computer systems. It took experts more than three days to eradicate the worm completely.
In July, the worm's alleged author, Robert Morris Jr., was indicted in Syracuse, N.Y., on felony charges of violating the Computer Fraud and Abuse Act of 1986. Morris's lawyer has filed four motions to dismiss the case; arguments will be heard October 20.
But on the Internet today, computer security at many installations has gone back to ``business as usual.''
``One particular customer was very worried about the Internet worm and wanted a fix for it,'' says Beverly Ulbrich, product manager for operating system security at Sun Microsystems in Mountain View, Calif. But now, says Miss Ulbrich, 70 percent of that customer's ``several hundred'' computers do not have the fix.
The day the worm hit, computer network experts at the Massachusetts Institute of Technology (MIT) in Cambridge, Mass., guessed that perhaps 6,000 computers had been affected by the program. But only one person actually counted the number of computers visited by the worm.
``Nobody has been doing real grunt research,'' says Stoll, who presented a paper yesterday on the worm's ``epidemiology'' at the 12th National Computer Security Conference in Baltimore. After nearly a year of research, Stoll found that the worm entered only about 2,600 computers.
Stoll, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., became a de facto expert on computer security when he helped US and West German officials crack a spy ring that had been using international data networks to break into US military and defense contractor systems. (See diagram.)
``Is it [the computer `virus' threat] worth being concerned about or not?'' he asked in a recent interview. The real danger, Stoll says, is not automated programs but warm-blooded people.
``No virus has been found to infect more than a few percent of computers [susceptible to attack]. The chances of being hit by a computer virus are small...,'' he says. ``It's probably cheaper to make backups and then, if hit, to clean up the mess afterwards, than it is to chase and fight off every possible infection.''
One reason the Internet worm attracted so much attention is that the tricks it used could just as easily have been exploited by an individual seeking to capture or destroy information stored on a computer connected to the network. That focus on computer security has largely been lost, says Donn Seeley, a senior systems programmer at the University of Utah.
``We've fixed the bugs and gone on our merry way as usual. We aren't really prepared for what will happen when the next bugs are discovered,'' Mr. Seeley says.
Because of the worm's notoriety, fixes for the particular security holes that it exploited were available within a matter of days, Seeley says, but other holes often remain for weeks or months after discovery. ``[Computer vendors] don't really like to hear about security holes. They fix them internally as quickly as possible, and it goes through the usual slow release process to get out to the rest of the world.''
When fixes are finally made available, there is no way to force computer system administrators to install them. ``When you come right down to it, I think that people have short memories,'' says Jon Rochlis, assistant network manager at MIT.
The problem, Mr. Rochlis says, is compounded by the proliferation of desk-top computers that have the same computational power and networking capabilities that mainframes had just a few years ago. Often these desk-top wonders have a single user and no person responsible for security and maintenance.
``You have researchers sitting in their labs.... They don't want to take new releases of the operating system, they don't want to read security things,'' says Rochlis. ``What right do I have to walk into their lab and say, `You must run this new release, because I think that it is good for your security?'''
But that same proliferation of computers - most of them connected to networks that eventually link to the Internet - has made every computer on the network less secure: each new machine is another point of access for intruders, and another place for them to hide.
``People are still facing basically the same old security breaches that they were facing five to 10 years ago,'' says John Gilmore, a computer consultant in San Francisco, who has publicized many security problems. ``The primary problem is a lack of awareness in the people who administer the system.''
Sometimes even the security experts are lax about security on their own machines. Last year, for example, Seeley wrote a paper identifying a weakness in the University of Utah's computer system; he suggested additional programs that could be installed to fix it. ``My boss refused to install them: He thought they were overkill,'' says Seeley. Last month, a group of undergraduates was caught breaking into faculty accounts on the university's computer system, using the precise hole Seeley had identified in his paper.
``After an incident like this, there is an acute interest in security that lasts a few weeks, all the easy changes are made, and then we forget about it,'' says Seeley.
``We put ourselves at the mercy of the bad guys. We assume that the next people who break our security will not be so evil as to hurt us severely.... We have a relatively open system and we rely on the fact that we are not interesting to protect us from the bad guys who would do us damage,'' says Seeley. He calls such practice ``security through obscurity.''
Many companies with equally open computers ``would be wonderful targets for both industrial espionage and real espionage,'' Seeley says. ``My suspicion is that they don't realize the danger that they are in.''
Attitudes are changing, but slowly. ``When I worked at Sun, I tested security on the internal network, trying to notice ways that people could break in,'' says Mr. Gilmore. ``The reaction that I got from [management] was, `if you are testing security, it must be because you are doing something wrong.''' Indeed, says Gilmore, some computer vendors keep their customers in the dark about holes in computer system security for fear that the information might find its way to people interested in breaking in.
Nowadays some companies are changing their attitudes, with the realization that the computer crackers already know about the holes.
``There are certain instances where it is important to tell people, and there are instances where it isn't,'' says Sun's Ulbrich. ``The issue is that if the fix is out there, the people who are concerned can put the fix in place before the hacker gets to it.''