When the Internet breaks, who ya gonna call?
The last time the Internet had a major upgrade was in 1986.
At this point, it’s hard to imagine life without the Internet, at least in the developed world. But buried underneath the breathtaking Web applications and streaming media that we use on a daily basis, the actual software that makes the Internet work is starting to show its age.
As recent events have demonstrated all too clearly, the Internet is especially vulnerable to deliberate attacks. Massive networks of hijacked computers, known as “botnets,” can be used to deluge target websites with enough traffic to essentially shut them down, much as a radio station running a call-in contest will have a constantly busy phone number. These attacks succeed, at least partially, because they are able to exploit weaknesses in the existing Internet protocols.
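The flood mechanism can be sketched with a toy capacity model: a server that can answer only a fixed number of requests per second cannot keep up once botnet traffic dwarfs legitimate traffic. The capacity figure and request rates below are illustrative assumptions, not measurements of any real attack.

```python
# A toy capacity model (not attack code) of the flood described above:
# a server that can answer a fixed number of requests per second.

CAPACITY = 1_000  # requests per second the server can handle (assumed)

def served_fraction(legit_rps: int, botnet_rps: int) -> float:
    """Fraction of all incoming requests that actually get answered."""
    total = legit_rps + botnet_rps
    return min(1.0, CAPACITY / total)

print(served_fraction(500, 0))       # normal traffic: every request served
print(served_fraction(500, 99_500))  # under a botnet flood: 1 in 100 served
```

The point of the model is that the server cannot tell good traffic from bad; it simply drowns, which is why the "constantly busy phone number" comparison fits.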
Twitter, the social-networking site with millions of users, was the victim of just such a distributed denial of service (DDoS) attack in early August. There is much speculation that the attack was triggered by the postings of a Georgian separatist, but whatever the root cause, all it took was a few keystrokes to unleash a botnet’s fury against Twitter, taking it down for half a day.
Like a jazzy sports car that has never had its oil changed, the underlying protocols of the Internet have remained largely unchanged since it came into being in the mid-1980s. The Internet can be surprisingly fragile at times and is vulnerable to attack.
The Internet evolved from the experimental military ARPAnet project, where technical decisions were made by consensus among the researchers involved. When consensus was reached, changes were made throughout the entire network. As it became clear that there was interest in the Internet beyond the limited research community it served, the military (and later the National Science Foundation, which inherited the Internet) opened it up gradually to commercial traffic.
“The original was just an experimental demo, not a finished product,” Dr. Doyle says. “And ironically, [the originators] were just too good and too clever. They made something that was such a fantastic platform for innovation that it got adopted, proliferated, used, and expanded like crazy. Nothing’s perfect.”
Rather than create a more robust network using the lessons we learned from the ARPAnet and early days of the Internet, we’ve instead been patching it up for the past 2-1/2 decades, Dr. Doyle says.
Unfortunately, the spirit of trust that had typified the ARPAnet and early Internet doesn’t hold up so well today. Many of the underlying computer protocols assume that everyone is an honest player, and increasingly there have been incidents where malicious parties have exploited this trust for their own purposes.
A glaring example is “DNS poisoning.” The domain name system (DNS) is the part of the Internet responsible for turning a name such as CSMonitor.com into the Internet version of a street address, which in this case is 184.108.40.206.
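The name-to-address lookup can be sketched as a simple table. Real DNS queries a distributed hierarchy of servers rather than a single dictionary; the sketch below is a stand-in for that, using the address quoted above.

```python
# A minimal sketch of the DNS idea: turn a human-readable name into a
# numeric address. Real DNS asks a hierarchy of servers; this table
# stands in for it.

dns_table = {
    "csmonitor.com": "184.108.40.206",  # address as given in the article
}

def resolve(name: str) -> str:
    """Look up the address for a (case-insensitive) domain name."""
    return dns_table[name.lower()]

print(resolve("CSMonitor.com"))  # -> 184.108.40.206
```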
Because DNS servers trust one another, a wily wrongdoer can convince a server to start handing out the wrong address for a name, sending Web surfers to the wrong website, perhaps a malicious one. That’s probably not a catastrophe when it’s CSMonitor.com, but potentially devastating if it’s BankOfAmerica.com.
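The trust problem can be illustrated with a toy resolver whose cache accepts whatever answer arrives, with no check on who sent it. All names and addresses here are made up for illustration (the attacker's address uses a reserved documentation range).

```python
# A naive resolver cache that trusts every response it receives --
# the assumption that DNS poisoning exploits.

cache = {"bankofamerica.com": "171.161.1.1"}  # hypothetical legitimate record

def accept_response(name: str, address: str) -> None:
    """Store any answer that arrives, without verifying who sent it."""
    cache[name] = address

def resolve(name: str) -> str:
    return cache[name]

# A forged response silently replaces the legitimate record.
accept_response("bankofamerica.com", "203.0.113.66")  # attacker's address
print(resolve("bankofamerica.com"))  # surfers now go to the wrong server
```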
Vinton Cerf, widely considered the “father of the Internet,” believes that this problem can be reduced by using cryptography to validate DNS records – but it will take time and maybe a little strong-arming to update the world’s Internet hubs.
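Cerf's fix, in the spirit of what the DNSSEC extensions do, can be sketched by publishing each record alongside a cryptographic signature, so that a forged record fails verification. In this sketch an HMAC with a placeholder shared key stands in for DNSSEC's public-key signatures; the key and records are illustrative assumptions.

```python
# A rough sketch of validating DNS records cryptographically: each record
# carries a signature, and a tampered record no longer verifies.
import hashlib
import hmac

KEY = b"zone-signing-key"  # placeholder; DNSSEC actually uses public-key pairs

def sign(name: str, address: str) -> str:
    return hmac.new(KEY, f"{name}={address}".encode(), hashlib.sha256).hexdigest()

def verify(name: str, address: str, signature: str) -> bool:
    return hmac.compare_digest(sign(name, address), signature)

good_sig = sign("csmonitor.com", "184.108.40.206")
print(verify("csmonitor.com", "184.108.40.206", good_sig))  # genuine record
print(verify("csmonitor.com", "203.0.113.66", good_sig))    # forgery rejected
```

A resolver that insists on a valid signature would discard the poisoned record from the earlier example instead of caching it.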
Mr. Cerf also thinks that increased use of cryptographic authentication can help with other problems, such as spam e-mail. Yet to Doyle, it is just another example of the patchwork fixes that he says epitomize the Internet.
The Internet can also suffer problems due to human error, sometimes with frustrating results.