Opinion: Why bug hunting security researchers are Digital Age heroes
Comments from an Oracle executive disparaging the work of security researchers misunderstand their value and ethics. While hackers poking around in code may irritate software companies, their work has made computers safer for everyone.
Maybe security researchers aren't at the level of decorated World War I Sgt. Alvin York when it comes to heroism. Still, they're pretty heroic considering how much they've made the digital world a safer place.
They risk the wrath of powerful corporations and government agencies to find and report critical vulnerabilities. They'll chance lawsuits and even jail time by poking around in proprietary software. And they're doing it to ensure that stuff gets fixed.
Although finding software flaws can get a security researcher some attention in the tech world, much of the work they do is thankless. In some cases, researchers can now get paid for their troubles through bug bounty programs, an accepted way to reward people for doing the work most companies should have done in the first place.
But not everyone thinks researchers who uncover flaws should get anything in exchange for their tireless work.
Enter Oracle Chief Security Officer Mary Ann Davidson, who recently penned a screed against independent security research and paying bug bounties.
After an Internet backlash over the comments, the post was promptly removed from her blog on the Oracle site. The company has since issued a statement that Ms. Davidson's views don't reflect its own beliefs and that it "works with third-party researchers and customers to jointly ensure that applications built with Oracle technology are secure."
Still, Davidson isn't the only executive waging a war against security researchers. Plenty of others have said that researchers are nothing more than irritating pains who cost them time, resources, and money to find and fix flaws in old or existing products when they'd rather be focused on making new ones.
Many of these executives feel that if the security researcher didn't exist, then security problems would somehow disappear. Or, at the very least, their customers would never find out about them. And that's an extremely troubling outlook.
Someone is going to find the vulnerability eventually. Isn't it better if that someone is a security researcher instead of a rogue nation-state, international espionage team, or cybercriminal who could exploit that vulnerability to suppress dissidents, steal intellectual property, or simply break into your bank account?
Even so, many companies feel that the effort to deal with vulnerability reports is too much work. Reports take time and effort to validate, and once validated, even more time to fix and patch.
But that shouldn't be a reason to view vulnerability reporting with apathy or skepticism. Instead, companies could require more information in a vulnerability report to help reduce the false positive rate. In fact, there are third-party firms whose sole purpose is to validate bug reports. That's why all companies (and not just tech companies) should have a public method of reporting problems with their products – a method that provides enough information to weed out most false positives.
No one likes to admit and fix mistakes. Disclosing bugs often leads to bad press and lost money. And while some critics think that security researchers or cybersecurity companies expose flaws to drum up business, the vast majority of flaws never make headlines. Most researchers just want the problems they discover fixed and, on occasion, to be compensated for their hard work.
To ensure that more things are repaired – and patched quickly – security researchers should be encouraged to find fault in our systems. Their work should be celebrated. Let's end this war on researchers and start celebrating them like true heroes.
C. Thomas (aka Space Rogue) is a strategist at the cybersecurity firm Tenable Network Security. You can follow him on Twitter @SpaceRog.