Instagram empowers users to silence harassing commenters

In an effort to make Instagram harassment-free, the photo sharing app is giving users a lot more control over their comment section.

July 30, 2016

Instagram is introducing a new comment monitoring system that will give individual users control over their comment section in hopes of curbing online harassment.

While Instagram has its own site-wide commenting policies, this new feature will allow users to filter out comments containing words they have deemed offensive. Users will also be able to disable comments entirely on a picture-by-picture basis.
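Instagram has not published how its filter works under the hood, but the announced behavior amounts to a user-defined word blocklist plus a per-post switch to turn comments off. The sketch below is purely illustrative of that idea; the word list, post IDs, and function names are hypothetical, not Instagram's actual implementation.

```python
# Illustrative sketch only -- models the announced behavior, not Instagram's code.
# A user-defined list of blocked words plus a per-post comment kill switch.

BLOCKED_WORDS = {"ugly", "loser"}        # words the account owner has flagged (hypothetical)
COMMENTS_DISABLED_POSTS = {"post_123"}   # posts where comments are turned off (hypothetical)

def allow_comment(post_id: str, comment: str) -> bool:
    """Return True if the comment may appear under the given post."""
    if post_id in COMMENTS_DISABLED_POSTS:
        return False
    words = (w.strip(".,!?").lower() for w in comment.split())
    return not any(w in BLOCKED_WORDS for w in words)

print(allow_comment("post_456", "You are such a loser!"))  # False
print(allow_comment("post_456", "Great shot!"))            # True
```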

For years, social media sites played the freedom of speech card and chose not to censor or monitor comments, but that mind-set is shifting, and the need for intervention is starting to overcome the desire for completely free and open discourse online.


“Ten years ago when I was building social tools, when people behaved abusively, I was the guy saying, ‘We believe in free speech, and people are going to be jerks, and it’s not our fault,’” Anil Dash, blogger, entrepreneur, and technologist, told Wired as part of a panel on what Silicon Valley can do to solve online harassment. “I didn’t get it. And that understanding took me 10 years. I mean, I’ve been doxed by people using the tools that I built.”

Instagram (and Twitter this past week) are starting to take steps to address harassment, even if it means less freedom of expression.

“Our goal is to make Instagram a friendly, fun and, most importantly, safe place for self expression," said Instagram's head of public policy, Nicky Jackson Colaco, in a statement to The Washington Post. "We have slowly begun to offer accounts with high volume comment threads the option to moderate their comment experience. As we learn, we look forward to improving the comment experience for our broader community.”

This feature should be open to all users within the next few months, The Washington Post reports.

Internet harassment has become an increasingly big problem as more people take nasty or unfiltered behavior online, protected by the virtual distance and anonymity a computer screen places between the victim and the harasser.


A Pew Research Center study published in 2014 found that four in 10 internet users have experienced harassment of varying degrees of severity. The odds of being a victim of online harassment increase for young adults: 70 percent of 18-to-24-year-olds have been harassed online, and young women in that group face especially high rates of sexual harassment and stalking.

Adria Richards, a DevOps engineer who promotes technical solutions for reducing online harassment, likens the state of the internet to a city that isn't safe to walk around anymore.

Twitter recently took a step forward in this regard, banning known internet troll Milo Yiannopoulos for harassing Ghostbusters actress Leslie Jones. Earlier this month, it opened its doors to increasing the number of verified accounts, which requires users to provide a real name and photo (restricting the anonymity many trolls hide behind), and it offers some tools for filtering comments.

Currently, fewer than 200,000 of some 310 million active Twitter users have verified accounts.

Are these steps by Twitter sufficient to curb harassment? Nick Statt asks at The Verge:

The truth is that any real solution — verifying more users, implementing more liberal bans, or developing stronger anti-harassment tools — risks fundamentally changing the nature of Twitter. The company is not openly dedicated to any of those strategies, but has instead made small measures in every direction. As frequent Twitter critic and game developer Zoe Quinn has put it, these are "bandaids on bullet holes," and the company steadfastly refuses to admit the full scope of the problem. This isn’t about what Twitter should become, but rather that it should decide to become something — anything, really — other than what it is today.

Instagram already has a combination of banned hashtags and a team of employees who find and remove offensive images from the site. But by offering users a way to take control of their comment sections, the company adds another layer of protection against harassers.

While some anti-harassment measures may be unique to the platform – what is a good strategy for Instagram may not work for, say, Snapchat – broader societal solutions are also needed, say observers.

“When I think about solutions, I think about it in a three-pronged approach: a cultural shift, tech solutions, and then the legal aspect,” Anita Sarkeesian, founder of Feminist Frequency, who has experienced massive waves of online harassment as a female gamer, told Wired. “There are already laws against this stuff. Sending someone a death threat is already illegal, so having it taken seriously is the third prong.”