Should what you do offline get you banned from social media?

As tech’s relationship with real-world violence comes under increased scrutiny, Facebook, Twitch, and Twitter are moderating users’ offline activities. While some praise the guidelines, others criticize them for infringing on civil liberties.

Mike Blake/Reuters
Attendees walk by a Twitch logo painted on stairs during opening day of E3, an annual video game expo in Los Angeles, June 11, 2019. Platforms like Twitch now have policies that take into account offline behavior to tackle extremism and hate on their sites.

Earlier this month, Twitch announced it would start banning users for behavior conducted away from its site.

The move by Amazon’s live-streaming platform involved hiring a law firm to conduct investigations into users’ offline misconduct, a new twist and the latest prominent example of tech companies acting on “off-service” behavior.

How platforms enforce against activities conducted not just on their services but on other sites and offline is often only described vaguely in their rules. But as lawmakers and researchers examine tech’s relationship with real-world violence or harm, this type of moderation is gaining attention.

Facebook’s rules ban users it deems dangerous, including those involved in terrorist activity, organized hate or criminal groups, convicted sex offenders, and mass murderers. People who have murdered one person are mostly allowed to stay, a spokeswoman said, because of the sheer volume of such cases. Last year, Facebook expanded the list to include “militarized social movements” and “violence-inducing conspiracy networks” like QAnon.

Meanwhile, Twitch’s new rules say it may ban users for “deliberately acting as an accomplice to non-consensual sexual activities” or actions that would “directly and explicitly compromise the physical safety of the Twitch community,” categories which a spokeswoman said were intentionally broad.

Twitch’s change in policy largely stemmed from the gaming industry’s #MeToo moment in the summer of 2020, when the site saw harassment at real-life gaming events and on sites like Twitter and Discord, Chief Operating Officer Sara Clemens told Reuters.

While some groups have praised platforms for being proactive in protecting users, others criticize them for infringing on civil liberties.

“This isn’t content moderation, this is conduct moderation,” said Corynne McSherry, legal director at the digital rights group Electronic Frontier Foundation, who said she was concerned about platforms that struggle to effectively moderate content on their own sites extending their reach.

In interviews, platform policy chiefs described how they drew different lines around off-service actions that could impact their sites, acknowledging a minefield of challenges.

“Our team is looking across the web at a number of different platforms and channels where we know that our creators have a presence ... to understand as best as possible the activities that they’re engaging in there,” said Laurent Crenshaw, policy head at Patreon, a site where fans pay subscriptions for creators’ content.

Looking beyond their own sites has helped companies remove extremists and others who have “learned the hairline cracks” in site rules to stay online, said Dave Sifry, vice president of the Anti-Defamation League’s Center for Technology and Society, which has pushed for major platforms to incorporate this behavior into decisions.

Self-publishing site Medium established off-service behavior rules in 2018, after realizing attendees of the August 2017 white nationalist rally in Charlottesville who had not broken rules on specific sites appeared to be “bad actors on the internet in general,” it said.

Last summer’s protests over the murder of George Floyd prompted Snap to talk publicly about off-platform rules: CEO Evan Spiegel announced Snapchat would not promote accounts of people who incite racial violence, including off the app. In December 2020, TikTok updated its community guidelines to say it would use information available on other sites and offline in its decisions, a change that a spokeswoman said helped it act against militia groups and violent extremists.

Notably, when sites like Facebook, Twitter, and Twitch banned former United States President Donald Trump this year, they took into account his off-service actions that led to his supporters storming the U.S. Capitol on Jan. 6.

From murder to money laundering

Tech companies differ in their approaches to off-platform behavior, and how they apply their rules can be opaque and inconsistent, researchers and rights groups say.

Twitter, a site where white nationalists like Richard Spencer continue to operate, focuses its off-service rules on violent organizations, global director of public policy strategy and development Nick Pickles said in an interview.

Other platforms described specific red-flag activities: Pinterest, which takes a hard-line approach to health misinformation, might remove someone who spreads false claims outside the platform, policy head Sarah Bromma said. Patreon’s Mr. Crenshaw said that while the subscription site wanted to support rehabilitated offenders, it might prohibit or restrict convicted money launderers or embezzlers from using its platform to raise money.

Sites also diverge on whether to ban users solely for off-service activity or only when on-site content is linked to the offense.

Alphabet’s YouTube says it requires users’ content to be closely linked to a real-world offense, but it may remove users’ ability to make money from their channel based on off-service behavior. It recently did this to beauty influencer James Charles for allegedly sending sexually explicit messages to minors.

Mr. Charles’ representatives did not respond to requests for comment. In a statement posted on Twitter this month, he said he had taken accountability for conversations with individuals he thought were over 18, and that his legal team was taking action against people who spread misinformation.

Deciding which real-life actions or allegations require online punishments is a thorny area, say online law and privacy experts.

Linking the activity of users across multiple sites is also difficult for reasons including data privacy and the ability to attribute actions to individuals with any measure of certainty, say experts.

But that has not deterred many companies from expanding the practice. Twitch’s Ms. Clemens said the site was initially focusing on violence and sexual exploitation, but it planned to add other off-site activities to the list: “It’s incremental by design,” she said.

This story was reported by Reuters.
