This article appeared in the June 06, 2017 edition of the Monitor Daily.


Why the internet remains an ISIS training ground

How should tech companies, such as Facebook and Google, respond to terrorists using their platforms to spread hate, recruit, and teach people how to commit attacks? The solution may not be as simple as it appears.


Tech companies are back at the center of a debate over control of the wild world of the World Wide Web. In the wake of the terrorist attack on London Bridge over the weekend, British Prime Minister Theresa May and London Mayor Sadiq Khan responded with calls to require internet providers and social media companies to shut down extremist forums. But analysts say pushing technology companies to remove extremist content may not be the straightforward solution it seems. Aside from obvious censorship concerns, there are questions about how effective removal can be, as terrorist groups have become adept at hopping from platform to platform. What’s more, driving terrorist communications entirely onto the dark web could hamper intelligence agencies’ ability to monitor their movements. Rather than focusing money and energy on an online game of whack-a-mole, Brookings Institution analyst Eric Rosand suggests a more preemptive approach: Invest in communities. “How do you give them options, other than going online, to search for meaning in their lives? We don’t invest enough in that.”

Sipa/AP
The leader of the militant Islamic State (ISIS), Abu Bakr al-Baghdadi, makes what would be his first public appearance, at a mosque in the center of Iraq's second city, Mosul, according to a video recording posted online on July 5, 2014.

The terrorist attack on London Bridge over the weekend has reignited a debate about tech companies’ level of responsibility in preventing terrorism. Hours after Saturday’s attack, British Prime Minister Theresa May called for a regulatory crackdown on online content and criticized the tech industry for giving extremist ideology “the safe space it needs to breed.”

London Mayor Sadiq Khan echoed that call in a statement Monday. “After every terrorist attack we rightly say that the internet providers and social media companies need to act and restrict access to these poisonous materials,” he said. “But it has not happened ... now it simply must happen.”

But analysts say pushing technology companies to remove extremist content may not be the straightforward solution it seems.

There are the expected censorship concerns, but it’s not as simple as free speech versus security. Some say removing content might not be effective in disconnecting Islamic State (ISIS) recruiters from potential recruits, and may even make it more challenging for intelligence agencies to monitor terrorist plots online. Others suggest that focusing on online content is a distraction, and that efforts should instead aim to prevent those susceptible to extremist messages from seeking them out online in the first place.

The calls come amid reports that one of the three attackers responsible for Saturday’s assault may have been radicalized by extremist sermons on YouTube.

ISIS videos and other materials that have surfaced online in the past year highlight how to maximize damage with vehicle and knife attacks – a script eerily similar to the London Bridge attack, which left seven dead and 48 injured.

The line between stifling speech and thwarting terrorism

The open nature of the internet has long been criticized by regulatory advocates as offering terrorists a free forum to circulate extremist content. By one count, as many as 90 percent of terrorist attacks in the past four years have had an online component. But those opposed to a regulatory approach warn that cracking down on questionable content risks painting with too broad a brush and censoring legitimate content.

When it comes to extremist content, treading that line is tricky. Unlike child pornography, extremist content isn’t inherently illegal: holding extreme views is lawful, and so is broadcasting them in the United States. As such, deciding which content to remove requires a value judgment.

An algorithm can’t pick up on the nuances needed to find the line between over-censorship and dangerous extremist content, says Aram Sinnreich, professor of communications at American University in Washington. “There are no paths that preserve anything remotely approaching an open internet and at the same time prevent ISIS from posting recruitment videos.”

Many large tech companies have tried to compromise by employing an army of human workers to review content flagged by users as problematic. The reviewers use the tech company’s terms of use as guidance, but in the case of extremist content, it’s not always black and white.

But Hany Farid, senior adviser to the nonprofit Counter Extremism Project, says it is possible for an algorithm to find the sweet spot, as long as humans work with it. A computer science professor at Dartmouth College, Dr. Farid helped develop the tool now used by most internet companies to identify and remove child pornography. He has also developed a more sophisticated tool that he says can be harnessed to weed out extremist content.
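Tools like the one Farid describes generally rely on robust, or perceptual, hashing: known extremist images or videos are reduced to compact signatures that survive minor edits, and new uploads are compared against a database of those signatures. The sketch below is a minimal Python illustration of that general idea, not Farid’s actual algorithm; the average-hash approach, the distance threshold, and the Pillow dependency are all assumptions made for the example.

```python
# Minimal sketch of perceptual ("robust") hash matching, for illustration
# only -- a simple average-hash, not Farid's actual algorithm.
# Assumes the Pillow imaging library; the distance threshold is arbitrary.
from PIL import Image

def average_hash(path, size=8):
    """Reduce an image to a 64-bit signature that survives small edits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def matches_known_content(path, known_hashes, max_distance=10):
    """Flag an upload whose signature is close to a known, banned item."""
    h = average_hash(path)
    return any(bin(h ^ k).count("1") <= max_distance for k in known_hashes)
```

Because the comparison tolerates a few differing bits, re-encoded or lightly cropped copies of a known file can still be caught – the property that makes this approach more resilient than matching exact file checksums.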

Farid says internet companies’ concerns about crossing the line into censorship are unfounded.

“I’m not buying the story” that it’s too difficult or there’s a slippery slope leading to more censorship, Farid says. “That’s a smokescreen, saying there’s a gray area. Of course there is. But it doesn’t mean we don’t do anything. You deal with the black and white cases, and deal with the gray cases when you have to.”

Tech companies have gone through “an evolution of thinking” recently and are now more proactively removing content on their own, says Seamus Hughes, deputy director of the Program on Extremism at George Washington University. He points to the 2013 Boston Marathon bombing as a turning point. Investigators found clues that the attackers may have learned how to make a bomb from Inspire magazine, an online, English-language publication reportedly produced by Al Qaeda.

“It became so there was less of a level of acceptance for general propaganda to be floating out there,” Mr. Hughes says.

In one initiative launched last year, the tech giants are teaming up to make it easier to spot terrorism-related content. Facebook, Microsoft, Twitter, and YouTube have developed channels to share information about such extremist content and accounts so that individual companies can find and take it down more quickly.
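In practice, such an arrangement amounts to sharing signatures rather than the content itself: when one platform removes an image or video, it can contribute a hash to a common database that the others check uploads against. Here is a hedged sketch of what such a shared lookup might look like; the class and method names are hypothetical, not the companies’ actual interface.

```python
# Hypothetical sketch of a shared hash database between platforms. The
# interface below is an assumption; the companies' real system isn't public.
class SharedHashDB:
    def __init__(self):
        self._entries = {}  # signature -> metadata about removed content

    def contribute(self, signature, platform, label):
        """One platform registers the signature of content it removed."""
        self._entries[signature] = {"source": platform, "label": label}

    def lookup(self, signature):
        """Another platform checks an upload before it spreads further."""
        return self._entries.get(signature)

# Usage: once PlatformA removes a video and shares its signature,
# PlatformB can catch re-uploads of the same file on arrival.
db = SharedHashDB()
db.contribute("9f2c1a", platform="PlatformA", label="terrorist propaganda")
print(db.lookup("9f2c1a"))  # PlatformB's upload check finds a match
```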

Whack-a-mole concerns 

Still, some say that removing content might not actually be an effective approach to stem radicalization and recruitment by terrorist organizations.

One concern is that extremist content will simply move to other platforms.

“It’s sort of a whack-a-mole kind of problem,” says Eric Rosand, senior fellow in the Project on US Relations with the Islamic World at the Brookings Institution and director of The Prevention Project: Organizing Against Violent Extremism in Washington, D.C. “Terrorists will find another way to reach out with propaganda” if it’s removed.

That could mean moving onto smaller platforms with more encryption and less bandwidth to review and remove content.

This content could also move to the dark web, a section of the internet that is heavily encrypted and challenging for intelligence officials to track. The dark web’s audience is limited, which could reduce recruitment for organizations like ISIS, Hughes says, but those who do make it into its depths are particularly dedicated.

And then there’s the question of where intelligence agencies can best keep tabs on extremists, Hughes says. “Is it better for these guys to be on the systems where we know we can [collect information on] them, we know who everyone is, but they can reach more people? Or is it better to push them off to the margins so they’re only talking to who they already were going to talk to to begin with?”

Counter-messaging

Some tech companies and government officials have been weighing alternative options to counteract extremist content. One idea is to harness the tools of the internet and social media to reach people in danger of being radicalized – in other words, use the same tools as ISIS in a sort of counter-messaging effort.

Google’s 2015 pilot project, the “Redirect Method,” tried to target the audience most susceptible to online recruitment and radicalization and, when they searched for certain terms, directed them toward existing YouTube videos that counter terrorists’ messages. The project used the same principles businesses use to target ads at particular consumers.
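The mechanics resemble keyword-targeted advertising: searches matching a curated list of terms trigger counter-messaging videos alongside ordinary results. A minimal sketch of that matching logic follows; the term list and playlist are placeholders invented for the example, not the project’s actual data.

```python
# Illustrative sketch of the Redirect Method's core logic: match search
# queries against a curated term list and surface counter-messaging videos.
# The terms and playlist below are placeholders, not Google's actual data.
RISK_TERMS = {"join isis", "how to make hijrah"}  # assumed example terms
COUNTER_PLAYLIST = [
    "https://www.youtube.com/playlist?list=COUNTER_NARRATIVES",  # placeholder
]

def redirect_candidates(query):
    """Return counter-messaging videos when a query matches a risk term."""
    q = query.lower()
    if any(term in q for term in RISK_TERMS):
        return COUNTER_PLAYLIST
    return []  # no match: show ordinary results, no intervention
```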

Similarly, officials in the State Department’s Global Engagement Center have used paid ads on Facebook as a means of reaching out to young Muslims who may be targeted by extremist recruiters. The ads are for videos and messaging that counteract what they hear from jihadists.

But online content might not be as responsible for radicalizing terrorists as some politicians are implying, says Dr. Rosand of Brookings. “It’s as much about the offline networks, it’s as much about the grievances that drove them to violence, or made them very susceptible to violent messages, as they become radicalized.”

He suggests that politicians instead encourage tech companies to invest in communities by providing other alternatives to the path of terrorism. “How do you give them options, other than going online, to search for meaning in their lives? We don’t invest enough in that.”

(Illustration by Jacob Turcotte.)
