After 'Facebook killing,' social media confronts its dark side

A Facebook-shared murder video this week is resurfacing hard questions about civility on the internet and whether tech companies do enough to curtail violence, hate, and other abuses on their platforms.

Stephen Lam/Reuters
Facebook CEO Mark Zuckerberg spoke during the annual Facebook F8 developers conference in San Jose, Calif., this week.

Can civility survive on an unfiltered internet? Is it a sad-but-inevitable feature of our connected world that we’ll occasionally witness the worst of humanity as well as the best?

Those are some of the hard questions that surfaced this week after video of the death of Robert Godwin Sr., a retired foundry worker and grandfather, was posted to Facebook. The video was shocking for its brutality and immediacy. In less than a minute, a gunman whom police identified as Steve Stephens held up his smartphone camera to Mr. Godwin and filmed his murder on a Cleveland street.

Of course, all acts of senseless violence are troubling, and the internet has long provided a stage for heinous acts. But the way Godwin’s death was recorded – and uploaded to the world's biggest social media platform – is stirring a reexamination of the role technology should play in keeping our digital public squares free of violence, hatred, and other abuses.

Godwin’s killing is “an act that hits us hard in the gut,” says Andrea Weckerle, founder of CiviliNation, a group that promotes more civil discourse on the web and works to prevent online harassment. "This particular situation is one [that] almost everyone can agree is horrifying."

But it’s something Facebook should have anticipated when it rolled out Facebook Live, its real-time video streaming feature. Around the time of its launch last year, Facebook CEO Mark Zuckerberg said Facebook Live was meant to support "the most personal and emotional and raw and visceral ways people want to communicate."

Godwin's murder wasn't streamed live, but the alleged killer did use that platform minutes later to confess to the killing. Mr. Stephens has since taken his own life.

"I would be amazed if the brilliant people at Facebook could not anticipate that someone would use it in this way," says Ms. Weckerle. Therefore, she says, the company needs to rethink how it monitors for the worst kinds of images and videos surfacing on the platform.

“They’ve got to put much greater resources behind monitoring,” she says. “I would love to see these organizations invest as heavily in that as rolling out new functionality on their platforms.”

Seeking a balance

Facebook and other social media companies are still figuring out the right balance for how to monitor for offensive content while preserving users’ ability to upload and share material that may shock some but that others find politically or socially valuable.

Weckerle says civility on the web doesn't have to mean everything becomes "pink ponies and rainbows. We are talking about giving people a chance to participate and to feel safe to express themselves online."

During the company's conference for software developers this week, Mr. Zuckerberg admitted that Facebook needs to do more to remove offensive and criminal content. "We have a lot of work and we will keep doing all we can to prevent tragedies like this from happening."

Facebook does search for – and remove – offensive content on a daily basis. According to Justin Osofsky, the company's vice president of global operations, Facebook wasn’t notified about the Godwin murder video until an hour and 45 minutes after it appeared. Twenty-three minutes later, it disabled Mr. Stephens’s account.

Mr. Osofsky said in a blog post that the company is “constantly exploring ways that new technologies can help us make sure Facebook is a safe environment. Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being reshared in their entirety.” He added that humans remain an integral part of monitoring: “thousands of people around the world review the millions of items that are reported to us every week in more than 40 languages.”

But that’s not good enough, says Carrie Goldberg, a New York attorney who specializes in computer harassment cases, especially ones involving online sexual abuse.

“Listen, social media isn’t intrinsically bad,” she says. “People do bad things on it because social media provides a built-in audience. But that does not absolve these companies from preemptively implementing procedures and security measures that would make it more difficult to act unconscionably.”

For instance, she notes, Facebook recently instituted a photo-matching system designed to prevent the sharing of intimate photos without the subject’s permission – a problem commonly referred to as “revenge porn.” But all too often, Goldberg and other experts note, social media companies are slow to implement new policies and procedures to deal with inappropriate, harmful, and criminal content.

“These companies can’t be reactionary and wait on horrible things to happen in order to create change,” she says. “They have to want to do better and if they don’t understand and embrace that, lawsuits and tighter laws are going to make them.”

The public's role

Law enforcement and government officials regularly request that social media companies remove criminal content or information that threatens national security (and companies often voluntarily take down questionable material). Under the 1996 Communications Decency Act, the companies themselves are not liable for the content users post. Yet there is mounting political pressure to hold internet companies responsible. One bill in Congress would pave the way for state prosecutors and victims of sex trafficking to bring cases against websites that may help facilitate sex trafficking.

Overall, the public is increasingly demanding that tech giants do more to clean up their sites, says Hemanshu Nigam, the founder of an online safety and security firm called SSP Blue that provides content monitoring services to tech companies. "Overwhelmingly the public is saying to do something about it," he says. "The community is fed up with it."

His firm employs about 350 human monitors who work 24/7 to review content posted to social media and dating sites that Mr. Nigam would not name for confidentiality reasons. Typically, he says, moderators or content-filtering systems end up removing about 2 percent of the content posted on the sites his firm moderates.

Inevitably, however, offensive content will slip past the moderators and software filters at Facebook and other massive social media sites, or criminals will find other digital platforms on which to broadcast their acts.

“That’s the reality of these systems. They are huge and there’s so much content,” says Kate Klonick, a resident fellow at the Information Society Project at Yale Law School. In many ways, she says, the nature of the internet means we all may have to confront uncomfortable and disturbing images on our Facebook or Twitter feeds. “You are going to get the good with the bad."

As for the case of the Godwin video, she says, it's not Facebook's fault that this tragedy occurred, or that the killing was posted to its platform. But it is holding "a mirror up to ourselves and making us realize that it’s not that great all the time."
