Heartbleed, the recently disclosed vulnerability that tore a gaping hole in computer security across half the Internet, exposed something else: a shift in US policy over when to keep such flaws secret – to be exploited by government spies only – and when to disclose and fix them.
Just hours after the National Security Agency was accused in a news report on April 11 of knowing all about Heartbleed two years earlier – and using it to spy while leaving US businesses on the Internet vulnerable – the Obama administration struck back in a statement denying that the NSA knew about it or used it.
The NSA soon tweeted the same. Tweets by spy agencies defending themselves against charges of conducting a global cyber-espionage game – with little regard for the privacy and economic well-being of America’s Internet-dependent society – almost seem the new normal.
What’s become clear, cyber-experts say, is that the NSA and other US spy agencies have long stockpiled cyber-vulnerabilities – identifying, purchasing, or otherwise acquiring obscure flaws in computer code. Those vulnerabilities are then used to craft “exploits” – cyber-weapons or spying tools used to sneak into and spy on, or damage, computer networks worldwide.
But that rampant gathering of cyber-vulnerabilities for weapons and spying may be changing. Nearly a year after former NSA contractor Edward Snowden leaked top-secret documents detailing the agency’s global cyber-surveillance practices, senior White House officials say the Obama administration will soon begin a new evaluation process that more routinely reveals, rather than keeps hidden, the majority of cyber-vulnerabilities, thereby boosting the nation’s cyber-defenses.
“In the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest,” Michael Daniel, the president’s cyber-security coordinator, wrote in an April 28 White House blog post. “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.”
That is not the same, he notes, “as arguing that we should completely forgo this tool as a way to conduct intelligence collection, and better protect our country in the long-run.” Mr. Daniel continues: “Weighing these tradeoffs is not easy, and so we have established principles to guide agency decision-making in this area.”
His online comments corroborate those of two other executive branch officials on how the government expects to shift its handling of cyber-vulnerabilities that, like Heartbleed, are among the most potent variety of cyber-threats. These are the “zero day” vulnerabilities, so called because software developers have had zero days to patch the flaw before attacks based on it can begin.
A key example of zero-day use was the Stuxnet cyber-weapon, developed reportedly by the US and Israel to digitally identify and then wreck Iran’s uranium enrichment program. Stuxnet used at least four zero-day exploits just to gain access to the Iranian centrifuge facility. It used other previously undisclosed vulnerabilities to do the damage to the facility’s centrifuges.
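The core of a flaw like Heartbleed is simple once laid bare. As a purely illustrative sketch (a toy simulation, not the real OpenSSL code), the bug amounted to a server echoing back as many bytes as a client *claimed* to have sent, without checking how many actually arrived:

```python
# Toy simulation of a Heartbleed-style over-read. All names here are
# invented for illustration; the real bug lived in OpenSSL's TLS
# heartbeat handling.

def heartbeat_buggy(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    # memory holds the request payload followed by unrelated "secret"
    # data that happens to sit next to it. Trusting the attacker's
    # length field leaks whatever lies beyond the payload.
    return memory[:claimed_len]

def heartbeat_patched(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    # The essence of the fix: silently drop any request whose claimed
    # length exceeds the number of bytes that actually arrived.
    if claimed_len > payload_len:
        return b""
    return memory[:claimed_len]

server_memory = b"PING" + b"secret-session-key"
print(heartbeat_buggy(server_memory, 4, 20))    # leaks bytes past the 4-byte payload
print(heartbeat_patched(server_memory, 4, 20))  # returns nothing
```

The client sends 4 real bytes but claims 20; the buggy version hands back 16 bytes of adjacent “secret” memory, while the patched one refuses.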
Notably, a White House review panel on surveillance practices recommended in December “that US policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on US Government and other networks.”
That view was echoed, too, by the NSA’s new director, Vice Admiral Michael Rogers, in written testimony at his March confirmation hearings. An “inter-agency process” would be used for evaluating cyber-vulnerabilities, he wrote. “The default is to disclose vulnerabilities in products and systems used by the US and its allies.”
It’s a delicate balancing act that requires weighing whether a vulnerability is needed to infiltrate legitimate intelligence targets like terrorist communications networks – or whether keeping it hidden will leave US businesses and networks open to attack from cyber-criminals and foreign intelligence services.
In his much-parsed blog post, Daniel listed the tests the new interagency review will use to determine whether cyber-vulnerabilities are to be revealed so they can be patched, or kept for cyber-weapons and spying. Criteria for evaluating vulnerabilities would, he wrote, include the following:
• How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the US economy and in national security systems?
• Does the vulnerability, if left unpatched, impose significant risk?
• How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
• How likely is it that we would know if someone else were exploiting it?
• How badly do we need the intelligence we think we can get from exploiting the vulnerability?
• Are there other ways we can get it?
• Could we utilize the vulnerability for a short period of time before we disclose it?
• How likely is it that someone else will discover the vulnerability?
• Can the vulnerability be patched or otherwise mitigated?
“These nine separate points, these questions, are very important because they’re not just about general transparency, but the actual balance that they have to establish in the process for a vulnerability to be kept close, or released,” says Jason Healey, director of the Atlantic Council’s Cyber Statecraft Initiative.
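Taken together, the nine questions amount to a triage procedure with disclosure as the default. A purely hypothetical sketch (every rule and threshold below is an assumption; the actual interagency weighting is not public) might look like this:

```python
# Hypothetical illustration of a disclose-by-default triage over
# Daniel's criteria. The keys, rules, and thresholds are assumptions
# made for the sake of the sketch, not the government's real process.

DISCLOSE, RETAIN = "disclose", "retain"

def triage(answers: dict) -> str:
    # Retention must be justified by intelligence need AND low
    # defensive risk; anything else defaults to disclosure.
    high_risk = (answers["widely_used"] or answers["significant_risk"]
                 or answers["likely_rediscovered"])
    strong_need = answers["intel_value"] and not answers["alternatives_exist"]
    if high_risk or not strong_need:
        return DISCLOSE
    return RETAIN

# A Heartbleed-like flaw in core Internet infrastructure:
print(triage({"widely_used": True, "significant_risk": True,
              "likely_rediscovered": False, "intel_value": True,
              "alternatives_exist": False}))   # "disclose"
```

Under this sketch, a widely deployed flaw is disclosed even when it has intelligence value – the balance Healey describes.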
But the many exceptions articulated by the administration so far have also sparked debate over how deep this change really goes, say several cyber-experts and a former government official.
“They’ve had a process for evaluating these vulnerabilities for a long time, and even this new variant has been normal practice for the White House, NSA and Department of Justice for years and years,” says James Lewis, a cyber-security expert with the Center for Strategic and International Studies. “I really don’t see things changing very much, if at all.”
Even well-meaning attempts at better vetting of cyber-vulnerabilities may run up against systemic obstacles that limit any serious change, says Joel Brenner, former head of US counterintelligence in the Office of the Director of National Intelligence from 2006 to 2009.
“Given the list of criteria that has to be considered, given the massive number of vulnerabilities that could be involved, I cannot imagine that they will be dealt with individually at a high level in the interagency process,” says Mr. Brenner, previously the NSA’s inspector general.
Some have speculated that the new evaluation will end up undercutting the NSA’s capabilities, forcing the agency to discard valuable exploits from its trove.
“There’s catalogs and catalogs and catalogs, volumes in their stockpile,” says John Bumgarner, a former intelligence officer and cyber-conflict expert. “So who is going to go through all those to weed out those more strategic ones?”
Some civil libertarians applaud the Obama administration’s moves so far, but remain skeptical about how much impact they will have.
“The bias toward disclosure they are talking about is a good thing,” says Daniel Gillmor, a technology fellow at the American Civil Liberties Union. “But they’ve carved out some exceptions so large you can drive a truck through them. So while I’m glad to see the White House engaged on this, I remain concerned we won’t see much progress, and the Internet will be less secure as a result.”
Even administration supporters say they doubt that well-meaning changes to the cyber-vulnerability vetting process will result in more systematic releases of critical vulnerabilities that make the country safer.
One fundamental reason that the Obama administration may find it difficult to release cyber-vulnerabilities: the process is stacked in favor of weaponizing them, says a former administration official, who requested anonymity so as not to burn bridges.
“I’m afraid I really don’t think what we are seeing is a huge departure from the way things have been done in the past,” the former official says. “I think Michael Daniel and others in [the] White House mean well ... but my sense is that it all ends up taking place after these tools are developed. So even if there is strong oversight of the tools themselves, a lot of key decisions will already be baked in – made earlier in [the] development process.”
Cyber-weapons development should probably be removed from military control and handed over to civilian agencies more responsive to privacy and cyber-defense concerns, the official argues.
“Right now the intelligence and defense communities are conducting the cyber-research and development,” the official says. “So they’re thinking military and intelligence first and foremost, as they should. But we’ve given the military and the NSA a national burden that frankly is beyond their purview.”
Something similar happened after World War II, when control over nuclear weapons development was removed from the military and handed to the civilian Atomic Energy Commission – forerunner of the Department of Energy and its national laboratories.
“If we are asking that economic and diplomatic and privacy concerns be factored in – well, then these issues have to be evaluated by a different group much earlier in the process for anything to really change,” he says. “That’s just the way things are.”