Can tweeters be tamed?

In an age of uncivil social media, a simple tweet can bring a torrent of threats and taunts. Can anything be done to stop the 'trolls'?

Illustration by Zina Saunders

It was a simple tweet, with just a hint of edge. After police used tear gas and rubber bullets against Black Lives Matter protesters in Berkeley, Calif., on Dec. 6, Kaya Oakes, an author and lecturer who teaches writing at the University of California, Berkeley, posted a note offering students injured in the protest extra time to finish an assignment.

“If any of my #Berkeley students were teargassed, batoned, or shot w/rubber bullets last night, you can have an extension on your essay,” Ms. Oakes tweeted. 

The tweet was tongue-in-cheek, according to Oakes, but it was also a show of support for what she thought was a largely peaceful protest that police met with undue force. But after conservative pundit Michelle Malkin’s Twitchy blog picked up Oakes’s tweet, it took on a life of its own. Over the course of the next two days, Oakes’s tweet showed up on the blog of Megyn Kelly at Fox News, the Fox and Friends’ Facebook page, and a local CBS affiliate.

“It was 24 to 36 hours of just constantly blocking people on Facebook and Twitter,” Oakes says of the deluge of online messages. “And I was getting e-mails, hundreds of e-mails – just the usual ... ‘you’re an idiot.’ Then it started getting into, ‘UC Berkeley should fire you, you’re encouraging students to be vandals.’ That kind of stuff.”

While Oakes never feared for her safety, she did feel overwhelmed and vulnerable – especially when a white-supremacist website picked up the story, complete with pictures of her that the group found online. Today, five months later, she still worries about bloggers dredging up the incident if she were ever to apply for another job.

Oakes’s experience is part of the sharp-elbowed new reality of the Social Media Age. Across the Internet, even relatively innocent tweets or Tumblr posts can now draw hostile comments and opinions from other social media users, quickly degenerating into a raging cyberstorm.

Cloaked in virtual anonymity – whether real or only naively perceived – hosts of users unleash torrents of vile and abusive taunts, especially toward women. Many of them would probably never behave in such an antisocial way in the “real” world. Yet amplified by the global digital megaphones now at nearly everyone’s fingertips, many go further, threatening rape, other violence, or even death.

Online shaming and attacks are hardly a recent problem, but in the past few months a number of high-profile incidents have magnified the issue. Former Major League pitcher Curt Schilling (@gehrig38 on Twitter) sent out a tweet in February congratulating his daughter for getting accepted into college to play softball – a simple proud father moment – only to have people attack her online with sexually graphic taunts and threats. 

Then actress and University of Kentucky superfan Ashley Judd (@ashleyjudd) was flooded with what she called a “tsunami of gender-based violence and misogyny” after tweeting her fervid support for the Wildcats basketball team during the playoffs in March. In an online maelstrom known as “Gamergate,” feminist critics of the male-dominated video game culture have been harassed by thousands of people on Twitter and other social media outlets in an attempt to ruin their reputations and careers.

And it’s not simply a matter of sticks and stones and names that never hurt. Among the panoply of “trolls” who lurk on social media, many relentlessly “dox” their targets, discovering and posting personal information such as home addresses, Social Security numbers, or embarrassing financial documents. Others engage in “revenge porn,” in which angry ex-spouses, ex-boyfriends, and ex-girlfriends post intimate videos or pictures to shame and humiliate their former partners.

All this, in turn, is spawning a virulent backlash. Online groups – digital vigilantes – are uncovering the anonymous people behind many of the taunts and “outing” them, often leading to the abusers being fired from their jobs. In other cases, public figures like Mr. Schilling and Ms. Judd have begun to fight back vigorously, while many states are trying to reboot the parameters of online behavior by outlawing various forms of cyberharassment.

Amid the flurry of attacks and counterattacks, the question is whether Twitter and other elements of the Wild West Web can be tamed, or better designed to foster civil, democratic conversation, without undermining the unfettered freedom the Internet provides.

Indeed, as more people move online, the long-evolved conventions of spoken language and the rituals of public civility are being challenged as never before. Digital platforms like Twitter, Yik Yak, and YouTube, with their instant global reach and borderless cross-cultural forums, are blurring the lines between public discourse and private conversation. The nature of human communication is changing in ways not seen since Gutenberg.

“I think we are reaching a tipping point,” says Danielle Citron, professor of law at the University of Maryland and author of “Hate Crimes in Cyberspace.” “I’ve been working on this issue since 2007, and it has taken seven years working on people’s consciousness – I think now we’re finally seeing it as a real problem.”

•     •     • 

Zoe Quinn, a body-pierced Millennial who develops video games and writes interactive online fiction, barely remembers the person she used to be before becoming the central figure in Gamergate and its online vortex of vitriol. The episode has done as much as any other to bring the issue of online harassment to the attention of lawmakers.

“When thousands of faceless strangers have set their sights on you, every aspect of your life is bombarded and prodded until who you were before is gone, and your life becomes almost unrecognizable,” Ms. Quinn told a congressional briefing in April. “The girl I used to be used to sit down and check her e-mail at work and get the occasional fan letter, business correspondence, and spam e-mail. These days it’s death threats and graphic fantasies about raping me, often accompanied with my home address and proof that the sender has everything they would need to carry through on it.”

In 2013, Quinn co-developed a popular interactive game called Depression Quest, in which players direct the fictional story of a character attempting to manage depression through a series of everyday events. But last August, after an ex-boyfriend posted a rant accusing Quinn of being unfaithful and of sleeping with a journalist to get favorable press for the game, a virtual culture war over feminism and the treatment of women in the video game industry began to rage.

The ex-boyfriend’s claim was unsubstantiated, but for some reason it resonated, echoing through an online gaming subculture long accused of misogyny and of fiercely defending the oversexualized and violent images of women in many video games. And in a frenzy all too familiar, the mostly anonymous online crowd pounced with a torrent of abuse and threats.

“We need to punish her.... Next time she shows up at a con/press conference/whatever, we move,” one user posted on the site 4chan, an image-based bulletin board in which anonymous users discuss various topics. “We’ll outnumber everyone, nobody will suspect us because we’ll be everywhere. We don’t move to kill, but give her a crippling injury that’s never going to fully heal....”

For months, Quinn endured a nightmare of similar invective. She was doxed – her home address, phone number, and other personal information disseminated across the Internet. Chat rooms and bulletin boards were filled with posts instructing users how to hack her e-mail and harass and stalk her, and nude photos were widely posted in an attempt to ruin her career.

Quinn left her home, stayed with friends for weeks, and lamented that she could no longer feel safe at gaming conferences. Even as Quinn told her story at the congressional briefing – organized by Rep. Katherine Clark (D) of Massachusetts, the National Task Force to End Sexual and Domestic Violence, and other organizations – the #gamergate hashtag on Twitter was abuzz with the familiar vitriol.

Gamergate is just one example of online discourse run amok. Indeed, 4 in 10 Internet users have experienced some form of online harassment, and nearly 1 in 5 has experienced severe forms of abuse, including physical threats, stalking, and sustained harassment, according to a Pew Research Center study last year.

Why is all this happening? 

•     •     • 

Nearly two and a half millenniums ago, Plato, the ur-philosopher of Western politics and culture, conducted a thought experiment featuring the mythical “ring of Gyges” – an imagined marvel that could magically make its wearer invisible to others.

Plato’s question about unfettered human nature, posed in the context of designing an ideal society, was this: Would a citizen wearing the ring, cloaked in anonymity, still choose to behave responsibly and uphold ideals of moral conduct? The answer in today’s fast-evolving Digital Age seems disturbingly clear: probably not.

While the reasons behind the rise of online incivility are complex, the simple explanation is that modern technology gives people cover to express some of the worst impulses in human nature. The same anonymity that provides the freedom to voice an idea, an opinion, or a “like” to the world also shields those who respond with abuse.

Aggression, tribalism, self-interest, and even cruelty for the sake of cruelty have existed since the beginning of civilization, and in many ways civilized societies have evolved precisely to rein in these types of traits, scholars say. Some behaviors are constrained by civil authorities enforcing laws, others through evolved social graces. 

But the nature of communication has now fundamentally changed as connections are mediated through the glow of electronic screens. 

In “real” life, if a person overheard a father congratulating his daughter for getting into college, he would most likely never think of making a crass quip about assaulting her – at least not without expecting a violent response. The person might make such a quip in private or among a group of rowdy friends, but “we tend not to be rude and vulgar to people who are present to us, precisely because in face-to-face communication we are conscious of our own vulnerability,” says Gordon Coonfield, director of graduate studies in communication at Villanova University in Philadelphia. 

Online, however, human beings are virtually set free from social conventions rooted in mutual vulnerability. “One way of reading the ring of Gyges is, even the ancients understood that one way our behavior can be corrupted is when we’re given the opportunity to do bad things without public scrutiny,” says Evan Selinger, professor of philosophy at the Rochester Institute of Technology in Rochester, N.Y.

In certain environments, numerous experiments show, even the most moral human beings are susceptible to behaving badly. “Whether it’s anonymity, the capacity to dehumanize others, or the lack of authority – these features bring out the worst in us,” Mr. Selinger says, citing the “Lucifer Effect” described by psychologist Philip Zimbardo, whose famous prison experiment saw subjects create an environment straight out of “Lord of the Flies.”

But another reason for the uncivil comments may be that in the Digital Age, at least so far, what happens online is somehow seen as less real, and therefore less serious. “There’s this attitude, even among police officers and judges, that 1s and 0s can’t hurt anyone and that victims can just turn their computers off and ignore them...,” says Ms. Citron at the University of Maryland. “That’s why it’s not easy to get courts to appreciate the increased vulnerability that people experience when their privacy is violated.”

 •     •     •

Yet this is slowly beginning to change. In the past year, state legislators have begun to recalibrate laws against various forms of harassment to fit the Digital Age, and law enforcement officials have slowly begun to take cybercrimes more seriously. At least 14 states have criminalized posting nude images online without a person’s consent, and another 25 states are considering similar strictures. California has led these efforts, and its attorney general, Kamala Harris, has been the most aggressive law enforcement official in the nation in combating “revenge porn” and related forms of online harassment.

Earlier this year, her office prosecuted Kevin Bollaert, operator of the “revenge porn” website UGotPosted.com. Mr. Bollaert also ran a separate website that charged the women pictured as much as $350 to have their private images removed from his revenge site. It was one of the first major convictions of its kind; in April, a San Diego judge sentenced Bollaert to 18 years in prison for identity theft and extortion.

Yet a host of challenges stand in the way of greater policing of digital offenses. “When a victim goes to state or local police, you’ve got a huge problem,” says Citron. “They’re really terrific at street crime. It’s what they do well. But when they’re confronted with online harassment, it’s technology they don’t know well. They don’t have training, they lack digital expertise, and what you don’t know well, you don’t want to deal with. That’s just human nature.”

Identifying suspects can be difficult. Internet service providers must be served with warrants, investigators have to establish who was actually at the keyboard, and if suspects live out of state, extradition requests have to be filed. “Victims are often told just turn your computer off, go buy a gun. This is a civil matter, go to the Feds; everybody wants to push the problem to somebody else,” Citron says.

Still, a crackdown on clear-cut crimes is just one part – and perhaps the easier part – of trying to bring greater order and civility to cyberspace. For private businesses like Twitter, now a central node in the democratic dissemination of news and opinion, rough and offensive – and constitutionally protected – speech is becoming a problem, too. 

“We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them,” Twitter chief executive officer Dick Costolo told his employees in February. “Everybody on the leadership team knows this is vital.”  

Unlike many other social media platforms, Twitter has long championed anonymous free speech, and for years many people have praised its transformative role in movements like the Arab Spring. In its early years, Twitter banned only impersonation and spam. But the trolls have now become so numerous that they represent a financial threat to the company: Growth in its 300 million-strong user base has stagnated, and high-profile celebrities and political leaders have fled the platform after being abused by other tweeters.

In April the company announced a revised user agreement and new behind-the-scenes algorithms. Previously, Twitter would only kick out users who issued “direct, specific threats of violence against others.” Now it will go after those who issue “threats of violence against others or promote violence against others.”

Through an automated system that identifies abusive tweets, the company will begin forcing some users to delete certain tweets before they can log on again. It will also issue a kind of “timeout” for abusive users, temporarily suspending their tweeting privileges. One limitation of all this is that automated algorithms are never 100 percent accurate, and some users may be wrongly punished. Trying to place curbs on offensive language also produces tough trade-offs for a democratic society. 

“How can you assure anonymity for a political dissident or corporate whistle-blower without also offering cover for a terrorist?” asks Aram Sinnreich, a social media ethicist at Rutgers University’s School of Communication and Information in New Brunswick, N.J. “How can you distinguish algorithmically between a nude photo that’s revenge porn, and a nude photo that’s a work of art? There’s no way for a machine to make these judgments, and no one has enough money to hire human beings to make these judgments.”  

 •     •     •

The other way to curb the problem is to let the Internet police itself – an appropriately hands-off approach for unfettered technology. The era of trolls has already given rise to a new kind of aggressive public scrutiny: the advent of “cyber mobs” and crowdsourced vigilantism. In some cases, experts say, the vigilantes are performing a valuable service. They are identifying people behind some of the anonymous attacks and shaming them online, helping to clean up the incivility on the Internet. 

But in other cases, the vigilantes are becoming more of a censuring mob, meting out their own form of justice for comments they don’t like. How do you distinguish between what’s edgy and what’s abusive? 

During the holiday season in 2013, Justine Sacco, the former communications director of IAC, a major Internet media company, infamously tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” just before boarding a plane to Cape Town, South Africa, to visit her family. 

Ms. Sacco’s quick quip – part of an acerbic, ironic sensibility on display in a number of her other tweets – was meant to be her own quirky kind of social commentary. 

But as she slept during the 11-hour flight, Twitter exploded with outrage, and her post was retweeted countless times. A hashtag, #HasJustineLandedYet, even emerged and trended around the world as a cyber mob waited eagerly for the communications executive to get her comeuppance.

Sacco’s employers at IAC were forced to release a statement while she was in the air: “This is an outrageous, offensive comment. Employee in question currently unreachable on a flight.” They later fired her for the tweet.

“Only an insane person would think that white people don’t get AIDS,” Sacco told British author Jon Ronson, whose 2015 book, “So You’ve Been Publicly Shamed,” explores a number of similar online incidents. 

“Living in America,” she added, “puts us in a bit of a bubble when it comes to what is going on in the third world. I was making fun of that bubble.”

Adam Mark Smith, a former chief financial officer of an Arizona-based medical device manufacturer, made a mean-spirited YouTube video at a Chick-fil-A drive-through, berating an employee to make a political point about the fast-food chain’s opposition to same-sex marriage. He endured vociferous public shaming on social media, lost his job and his house, and now receives food stamps.

“There are ethical ways to do a public shaming, of course,” says Jeremy Littau, professor of media sociology at Lehigh University in Bethlehem, Pa. “But the decentralized nature of the Internet means that you lose control of what your followers do, and when that movement becomes a mob, the loose organizing online gives rise to endless rage, where no apology or remorse is enough, and then we lose interest and move to the next outrage while people have their lives devastated. It’s a thorny issue; it doesn’t excuse the original poster’s bad behavior, but it raises questions about whether we are creating a cycle of Internet rage.”

In the end, of course, it would be easier if everyone would temper their comments and treat people with some dignity and respect. Or maybe they could just learn something from 13-year-old Mo’ne Davis.

Earlier this year, the Little League female pitching phenom was trolled on Twitter by a baseball player at Bloomsburg University of Pennsylvania. The college player had quipped: “Disney is making a movie about Mo’ne Davis? WHAT A JOKE. That sl-t got rocked by Nevada,” referring to the team that beat Davis’s squad in the Little League World Series. After an online firestorm, the college first baseman was kicked off the team.

Mo’ne, who has a 70 m.p.h. fastball, e-mailed Bloomsburg and asked the school to reinstate the player. “Everyone deserves a second chance,” she told ESPN. “I know he didn’t mean it in that type of way. I know people get tired of seeing me on TV. But sometimes you got to think about what you’re doing before you do it. It hurt on my part, but he hurt even more. If it was me, I would want to take that back. I know how hard he’s worked. Why not give him a second chance?”

Joshua Eaton contributed to this report from Boston. 
