Twitter, Inc. on Tuesday announced it would implement a new comment review tool for its video streaming app, Periscope, in an effort to combat abusive behavior on the platform.
The short-form social networking service acquired Periscope in 2015, and the video app has since attracted millions of users. Periscope allows people to live stream video from their Android or iOS smartphones, and tweet a link to their stream while it is active. Periscope streamers can already choose who can access and comment on their broadcasts and can block specific viewers entirely, but the new feature aims to give "scopers" (people who use the app) another tool to fight abuse.
"We want our community to feel comfortable when broadcasting," Periscope co-founder and chief executive officer Kayvon Beykpour said in a Twitter release. "One of the unique things about Periscope is that you're often interacting with people you don't know; that immediate intimacy is what makes it such a captivating experience. But that intimacy can also be a vulnerability if strangers post abusive comments."
To use the reporting feature, a viewer taps a comment during a live scope and marks it as problematic; the comment is then sent to a randomly selected panel of other live viewers, who act as moderators and decide whether it is abusive.
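The flow described above — flag a comment, convene a random panel of live viewers, act on their verdict — can be sketched roughly as follows. This is an illustrative assumption, not Periscope's actual code: the function names, panel size, and vote labels are all hypothetical.

```python
import random

PANEL_SIZE = 3  # assumed number of randomly chosen viewer-moderators

def review_flagged_comment(comment, live_viewers, judge):
    """Hypothetical sketch of crowd moderation for a flagged comment.

    `judge` stands in for each panelist's decision: it takes a viewer
    and the comment, and returns "abuse", "spam", or "looks fine".
    """
    # Draw a random panel from the viewers currently watching the stream.
    panel = random.sample(live_viewers, min(PANEL_SIZE, len(live_viewers)))
    votes = [judge(viewer, comment) for viewer in panel]
    # Majority rules: if most panelists call it abuse or spam,
    # the commenter is muted from the live chat.
    flagged = sum(1 for v in votes if v in ("abuse", "spam"))
    return "muted" if flagged > len(panel) / 2 else "allowed"
```

Because the panel is drawn from viewers who are watching at that moment, the verdict arrives while the broadcast is still live — the real-time quality the next paragraph notes as the reason Periscope chose user review over employee review.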
Spammy and hurtful commenting, or trolling, has dogged social media forums for years, as The Christian Science Monitor reported last year:
Cloaked in a virtual anonymity – whether real or just naively perceived – hosts of individual users will unleash torrents of vile and abusive taunts, especially toward women. Many of these users would probably never behave in such an antisocial way in the ‘real’ world. Yet when amplified by the global digital megaphones now at the tips of nearly every modern finger, many do worse, including threatening rape or other violence or even death.
Different platforms have experimented with a variety of methods of curbing harassment, but most involve some form of employee review of flagged comments. The Periscope strategy differs by relying on other users to decide whether comments or commenters should be removed. The tactic is in part a nod to the real-time nature of the platform, but it could also help train commenters to change their behavior, some observers say.
"It's potentially a good idea," Mark Griffiths, a psychology professor at Nottingham Trent University in England, told the BBC. "People learn from experience, so if somebody writes a comment and gets blocked from the live chat, perhaps they'll see what they wrote in a different light."
Senior Periscope engineer Aaron Wasserman agrees that while the tool is meant to protect scopers and commenters from abuse, it could also end up helping those who post offensive comments on streams.
"It was really important for us to offer a path to rehabilitation," he told Recode. "We're actually inviting you to stick around and do better next time."
While the system could prove beneficial in reducing spam and abuse, Professor Griffiths is also concerned the feature could be used to silence or exploit commenters in a new way.
"There are good intentions behind it, but when it comes to abuse online, things can be quite subjective," he told the BBC. "As with anything like this, these systems can be abused if people want to abuse them, particularly in political conversations."