Will Twitter's long-range plan to tackle online harassment work?
The social network unveiled a new Trust & Safety Council composed of 40 charity groups, researchers, and online privacy advocates in its latest effort to crack down on harassment faced by its users.
Amid mounting concerns about online harassment, Twitter has announced a longer-range plan to tackle abuse on its network.
On Tuesday, the social network unveiled a new Trust & Safety Council that includes 40 organizations and academics, such as the National Domestic Violence Hotline, LGBT advocacy group GLAAD, the Anti-Defamation League, and UK-based charity Anti-Bullying Pro.
“With hundreds of millions of Tweets sent per day, the volume of content on Twitter is massive, which makes it extraordinarily complex to strike the right balance between fighting abuse and speaking truth to power,” Patricia Cartes, Twitter’s head of global policy outreach, wrote in a blog post announcing the effort.
The company says it plans to use the group’s input “as we develop products, policies, and programs,” though it’s still unclear exactly how the new effort will expand its existing programs, which include adding new tools to report harassment and hiring more people to enforce its policies prohibiting abuse of users on the site.
"I think this council could be a real benefit to Twitter, not just in terms of getting input in terms of what they’re planning and how they’re dealing with [issues such as harassment] but also by creating an actual focal point for discussion," says Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology in Washington, which is a member of the group.
In December, the company slightly revamped its rules to include a new section banning “hateful conduct” that targets users on the basis of their race, nationality, sexual orientation, gender, gender identity, age, disability, or disease.
“I don’t think we will have all the answers, but it is important to seek expertise in these still very young issues,” Nick Pickles, Twitter’s UK head of policy, told the Guardian, saying Twitter wouldn’t focus on a single safety project but would emphasize “regular and consistent action” throughout the year. “We want to make sure we hear different views and think about these challenges in the fullest and most nuanced way.”
Twitter has long faced criticism for not providing additional protections for users who have been the subject of threats or abuse, particularly in the wake of the Gamergate controversy, in which women who criticized misogyny and racism in video games were often met with online harassment that also spilled offline.
Feminist Frequency, founded by the media critic and blogger Anita Sarkeesian – who says she was forced to go into hiding in 2014 after receiving death threats and online threats of a school shooting as a result of her critiques of misogyny in video games – is one of the Twitter council members.
“We suck at dealing with abuse and trolls on the platform…. It’s no secret and the rest of the world talks about it every day,” Dick Costolo, Twitter’s former chief executive, wrote in an internal forum in February 2015.
It marked an unusually frank admission of the company’s struggles, with Mr. Costolo adding, “We’re going to start kicking these people off left and right and making sure that when they issue their ridiculous attacks, nobody hears them.”
Last month, the service revoked the “verified” check mark of Breitbart.com editor Milo Yiannopoulos, who tweets as @Nero, saying that it had de-verified the conservative blogger’s account because Mr. Yiannopoulos had violated the network’s policy on abusive behavior.
He argued that Twitter had censored his account because of his political views, telling news site Fusion, “effectively they have privileged progressive opinions over mine and reduced my power and influence in the marketplace ... and they’ve done it on a whim, for political reasons, while refusing to explain why.”
The company’s decision prompted a backlash from supporters of Yiannopoulos, who used the hashtag #JeSuisMilo and changed their Twitter avatars to match his own picture.
Because of Twitter's popularity – the site has said 500 million tweets are sent per day – recent reports that the site could start displaying tweets using an algorithm rather than chronologically in a user's timeline also sparked a campaign against the site itself using the hashtag #RIPTwitter, something that appeared to take some Twitter employees by surprise.
"Seriously people. We aren't idiots. Quit speculating about how we're going to 'ruin Twitter' " wrote Brandon Carpenter, a senior software engineer at the company, in a series of tweets last week.
"Wow people on Twitter are mean," he added, prompting a user named Sarah Brown to respond, "Indeed. Be nice if there was some sort of effective anti-harassment policy in place, wouldn't it?"
Ms. Llansó says social media companies such as Twitter have often reached out to advocacy groups to have ad-hoc conversations about how best to address particular instances where many users have faced harassment, but the Trust & Safety Council is the first time Twitter has convened a permanent group to address the issue.
In the wake of the European Court of Justice's 2014 "right to be forgotten" ruling – which required search engines to consider removing particular results if a user requested it – Google also convened a panel of privacy and free speech experts, she notes.
Debates over what types of content are removed – at the request of non-US governments to comply with their laws, by individual users, or by groups concerned about harassment – are often thorny for many social networks, which have been accused of being less than transparent about their content removal policies.
Late last year, transparency advocates hailed the reinstatement of Politwoops – a group of sites that tracked politicians' deleted tweets in 30 countries but was effectively shut down in June by Twitter – as a key milestone for the site under the leadership of newly returned chief executive Jack Dorsey.
"Companies are facing a lot of scrutiny about their content policies, and rightly so; the decisions they’re making about what stays up and what goes down affect the free expression of all of their users," says Llansó, of the Center for Democracy and Technology. "I think the fact that Twitter has put together this council and announced it today is a good sign, and then it’s going to be a matter of seeing actually how it plays out."