
Will artificial intelligence revolutionize cybersecurity?

With criminal hackers becoming more effective at breaking into computer systems, cybersecurity researchers, government agencies, and academics are looking to artificial intelligence to detect – and fight – cyberattacks.

Most people probably have no idea they encounter artificial intelligence technology at nearly every turn on the Internet. It's how retailers track shoppers' behavior and show them ads that attempt to match their tastes in clothing or electronics. 

While that's a relatively simple use of artificial intelligence, often known as just AI, researchers, entrepreneurs, and US government officials are investing heavily in far more advanced applications: health care pursuits such as drug research, automotive technology like self-driving cars, and even teaching computers how to track and defend themselves against hackers. 

In fact, within the past year, security startups, leading academics, government agencies, and some of the largest digital security firms in the country have invested heavily in AI technology for cybersecurity, believing that recent advancements in processing power could allow computers to outperform humans when it comes to many aspects of defending networks.

"Just imagine a world in which bots are out there looking for vulnerabilities and other bots or artificial intelligence is simultaneously poking holes, plugging holes, poking back," said Ryan Calo, a law professor and director of the Tech Policy Lab at the University of Washington, a think tank that examines cybersecurity and AI policy.

Those kinds of systems are already beginning to enter the marketplace. Last year, big data startup Splunk partnered with consulting firm Booz Allen Hamilton to offer artificial intelligence-powered services to help deter attacks. The cybersecurity firm Kaspersky Lab has patented technology aimed at eliminating false positives for machine learning algorithms.

This week, the White House announced it will host a series of summertime workshops to further explore the benefits of AI in the government and the private sector.

"AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery – adding to the challenge of predicting and controlling how complex technologies will behave," said Ed Felten, deputy US chief technology officer, in a statement announcing the initiative. 

Additionally, the Defense Advanced Research Projects Agency (DARPA), the Pentagon's research wing, recently announced plans to develop a program to use AI to uncover culprits – whether criminal gangs or nation-state hackers – behind cyberattacks. 

That's the kind of technology that can provide a leg up to security teams attempting to find attacks in reams of network traffic every day, said Steve MacLellan, chief executive officer of Blue Sky Management and Research, a firm that invests in cybersecurity startups.

"Humans are overwhelmed by data,” said Mr. MacLellan. "The promise of AI says, if I can teach the machine to dynamically adapt. If I’m getting these hundreds of different signals coming in, the machine learning part says ‘Hey, this one is more important than that one.'"

Indeed, the amount of data that cybersecurity professionals and researchers contend with can be overwhelming, and the volume of information on cyberattacks and malware is growing rapidly every day. 

"As a rule of thumb, AI benefits tremendously the more data that you have," said David Brumley, a computer science professor at Carnegie Mellon University and the cofounder of the cybersecurity startup ForAllSecure.

"We’re really in this nice time period where the amount of data we have and the sophistication of our algorithms give us much more accurate answers," he said.

Similarly, a startup that spun out of the Massachusetts Institute of Technology called PatternEx wants to harness the power of machines to fight off hackers. Its AI2 platform – unveiled in a paper last month – aims to combine big data technology with the expertise of human cybersecurity analysts in hopes of better understanding how to stop cyberattacks. 

Like other systems, AI2 combs networks for suspicious activity using a machine-learning algorithm that's not supervised by humans. But since automated systems can only detect abnormalities – not attacks – Kalyan Veeramachaneni, the MIT researcher who led the project, designed the program so it doesn't generate an alert every time it spots something unusual, which can cause headaches for security teams that run routine penetration tests.

Instead, AI2 only spits out 100 to 200 threats each day, giving human analysts the ability to label attacks by type, IP address, and similarity with old strains of malware, training the machine to get smarter at spotting hackers.
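The workflow described above – an unsupervised detector surfaces a short list of anomalies, analysts label them, and the labels train a supervised model – can be sketched in a few lines. This is an illustrative approximation using scikit-learn, not PatternEx's actual code; the data, thresholds, and model choices here are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
# Simulated network-event features (e.g., bytes sent, failed logins).
events = rng.normal(size=(5000, 4))
events[:25] += 6  # a few injected outliers standing in for real attacks

# Step 1: an unsupervised detector scores every event for abnormality.
detector = IsolationForest(random_state=0).fit(events)
scores = detector.score_samples(events)  # lower score = more anomalous

# Step 2: only the ~200 most anomalous events are shown to analysts,
# mirroring AI2's 100-200 alerts per day rather than thousands.
top = np.argsort(scores)[:200]

# Step 3: analysts label the surfaced events (simulated here: the
# injected outliers are "attacks", the rest are benign oddities).
labels = (top < 25).astype(int)

# Step 4: those labels train a supervised model that sharpens
# future alerts, closing the human-in-the-loop feedback cycle.
clf = RandomForestClassifier(random_state=0).fit(events[top], labels)
```

The key design point the article highlights is step 2: capping the daily alert count keeps analysts from drowning in false positives while still collecting the labels the system needs to improve.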

PatternEx has already tested out the program using data from an unnamed e-commerce site, and plans to roll out the technology to a handful of Fortune 500 companies later this year.

But other companies drawing upon big data and AI to bolster cybersecurity aren't ready to cut the human out of the process entirely.

"The MIT system [AI2 ] is starting out with an unsupervised learning system," said Chris McCubbin, director of data science at Sqrrl, a cybersecurity startup. “There’s a lot of things that are unusual that the system’s not going to know about."

Still, many AI researchers and backers say that AI systems will eventually become smart enough to know the difference between an innocuous computer glitch and a malicious attack.

"As technology grows, you'll have smart houses, you'll have the Internet of things, you'll have all of these things are generating sensor data," said Blue Sky's MacLellan. "You need a platform that can consume that data." 

