How developing and disguising software bugs can help cybersecurity
The decade-old Underhanded C Contest rewards contestants who can best camouflage a malicious software vulnerability. And it’s meant to make all software more secure.
There's no question that software bugs are common. Typically, programs ship with at least one or two that researchers – or an unlucky public – discover later. So, for a bug connoisseur such as Scott Craver, truly extraordinary vulnerabilities are prized finds.
Since 2005, Dr. Craver has run the Underhanded C Contest, in which entrants compete to hide the most devious vulnerabilities inside the most elegant-looking source code written in the programming language C. This year’s challenge went live last weekend. The game is on.
Now, a decade after the first contest, Craver is introducing a new wrinkle. He’s partnering with the nonproliferation group the Nuclear Threat Initiative (NTI) for a nuclear treaty-themed contest. While winners in previous years have essentially won bragging rights (and a gift card), this year’s pot will be $1,000, thanks to NTI.
"Our goal is to demonstrate how difficult it is to write secure software by showing off innocent looking code that misbehaves," said Craver, an associate professor in the Department of Electrical Engineering at Binghamton University.
He hopes that by encouraging people to think about how code might hide bugs (whether on purpose or accidentally), they will program code more securely and be better at auditing it. This year’s competition, he said, relates to a problem that has existed since the 1970s.
“How can you get two nations to jointly write a piece of software both will trust to do something that is really trust-critical: implementing the terms of a nuclear disarmament treaty?” Craver asked.
For NTI, "it’s a chance to increase awareness," said Page Stoutland, vice president of scientific and technical affairs for NTI. "We hope it will bring in a new set of experts.”
The contest is one part of the organization's new effort to get security professionals more interested in the verification problems facing experts in nuclear diplomacy.
Developing computer programs to assist in verifying nuclear weapons programs, or treaties that limit deployment, is hardly theoretical. The 1979 SALT II treaty between the Soviet Union and the US, signed but never ratified by the US Senate, aimed to provably reduce the number of nuclear missiles without revealing their locations. To do this, a computer-based verification system was developed. However, mathematician Gustavus Simmons ultimately found that inspection agents could manipulate the program to secretly transmit missile locations.
This year's contest, "Faking Fissile Material," challenges contestants to create the kind of worst-case scenario that keeps verification engineers up at night. Contestants will write programs that take gamma ray spectrum readings and check whether they are sufficiently similar to the fissile material used in nuclear weapons. The programs are meant to work correctly – until the "host country" somehow triggers them to return a match even when there isn't one.
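To give a flavor of the kind of deniable flaw the contest invites – this is a hypothetical sketch, not an actual entry or the contest's real scoring code – consider a bin-by-bin spectrum comparison that calls `abs()` where it should call `fabs()`. In C, `abs()` takes an `int`, so the floating-point difference is silently truncated toward zero, and any per-bin discrepancy smaller than 1.0 vanishes before the tolerance check:

```c
#include <stdlib.h>

/* Hypothetical sketch of an "underhanded" comparison (not real contest
 * code). Compares two gamma ray spectra bin by bin and reports a match
 * if every bin differs by less than the tolerance.
 *
 * The deniable bug: abs() takes an int, so the double-valued difference
 * is truncated toward zero. Any per-bin discrepancy under 1.0 collapses
 * to 0, letting a "host country" nudge every bin by up to (but not
 * including) one full unit and still pass a sub-unit tolerance. */
int spectra_match(const double *ref, const double *sample, int bins,
                  double tolerance)
{
    for (int i = 0; i < bins; i++) {
        /* BUG (plausibly a typo): should be fabs(), not abs() */
        if (abs(ref[i] - sample[i]) > tolerance)
            return 0;  /* bins differ too much: no match */
    }
    return 1;  /* every bin within tolerance: match */
}
```

The one-character distance between `abs` and `fabs` is exactly the sort of plausible deniability the contest rewards: most compilers accept it, and a reviewer could easily read past it.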
Traditionally, the Underhanded C contest has been connected to current events. The previous two, for example, involved National Security Agency surveillance of social networks. An earlier contest was tied to encryption. This year is no different – the US government has agreed to a controversial nuclear pact with Iran.
But not all of the malicious coding contests have been so globally minded. The 2009 contest, “Losing my freakin’ luggage,” was a response to Craver’s experience with poor airline service. In that instance, the airline wasn’t able to check him in for the first leg of a three-legged flight.
“I noticed at the time that the problem had nothing to do with weather or mechanical problems – it was simply because the computers couldn’t talk to one another,” Craver said. “You can probably deduce the name of the airline by looking closely at the contest entry.”
Developing truly diabolical code isn’t easy. Many entries are suspiciously long or convoluted for the relatively simple tasks the contest assigns. A program that compares data sets, such as the one in this year's contest, should never require much more than 100 lines of code. Anything longer would raise a red flag. Craver says fewer than half the entries would fool him if he inspected the programs in the real world.
The best entries, Craver said, are the ones with plausible deniability – ones that could conceivably be explained as innocent mistakes. In the real world, high-stakes programs draw intense scrutiny and the most underhanded programmers prepare for the possibility of being caught.
His favorite example of an unintentional flaw that could have caused great harm is the so-called “Hursti Hack” of Diebold voting machines that was discovered by Harri Hursti in 2005.
When the machines reset, they checked to make sure zero votes had been cast. But the machines didn’t check if each of the candidates had zero votes – one candidate could start with 500 votes and another could start with negative 500 votes. The total would still be zero, but one candidate would have a 1,000-vote head start. If someone knew about the vulnerability, they could have used it to steal an election.
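The flaw described above can be illustrated in a few lines of C. This is a sketch of the logic, not Diebold's actual code (which is not public): with signed counters, a "totals sum to zero" reset check cannot distinguish an honest blank slate from pre-stuffed counters that cancel out, while a per-candidate check catches the difference.

```c
/* Illustrative sketch of the Hursti-style flaw (hypothetical code,
 * not the actual voting machine firmware). */

/* Flawed reset check: +500 and -500 cancel out, so stuffed counters
 * look identical to a genuinely blank machine. */
int totals_sum_to_zero(const int *votes, int candidates)
{
    int sum = 0;
    for (int i = 0; i < candidates; i++)
        sum += votes[i];
    return sum == 0;
}

/* Correct reset check: every candidate's counter must individually
 * be zero. */
int each_total_is_zero(const int *votes, int candidates)
{
    for (int i = 0; i < candidates; i++)
        if (votes[i] != 0)
            return 0;
    return 1;
}
```

The fix costs nothing in complexity, which is part of what makes the original oversight so plausible as an innocent mistake – and so useful to anyone who knew about it.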
That's the kind of malice that Craver is trying to prevent by encouraging programmers to be more careful not to introduce security flaws into programs.
Beyond awarding points for how deniable or hidden a vulnerability is, Craver also gives points for irony – or, as he calls it, “spite.” He would love to see the vulnerability reside in an error-checking function, for instance.
Programmers who have a knack for deception have until November 15 to submit. Full rules are available on the Underhanded C homepage.