Does the future of social media really hinge on these 26 words?

[Photo: Republican Sen. John Thune of South Dakota (left) questions Facebook CEO Mark Zuckerberg on Capitol Hill in Washington, Oct. 28, 2020. Greg Nash/Reuters]

Why We Wrote This

Who bears responsibility for online speech? The question is as old as the internet. Now lawmakers are looking to reform or repeal a piece of legislation that has long undergirded internet systems. But would it actually help?

Members of Congress don’t agree on much these days, but there’s one idea both the right and the left support: Something needs to be done to rein in social media companies. 

Democrats’ concerns revolve around harassment and misinformation, while Republicans’ focus is on political speech. Both sides have set their sights on a small but critical piece of federal law known as Section 230 of the Communications Decency Act. But such discussions have been clouded by fundamental misunderstandings of what CDA230 is and the role it plays in the legal functioning of the internet today.

What is CDA230?

The Communications Decency Act was a 1996 federal law meant to regulate pornography online. It was mostly struck down as unconstitutional by the Supreme Court just a year later. Section 230 is one of the only remaining pieces of the act, but it has proved foundational to the internet.

This is its key text: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

Effectively, it grants internet services immunity from liability for their users’ illegal activity (with a number of exceptions, including intellectual property and sex trafficking law). In other words, if you offer an interactive service online and someone uses it to post content that opens them to legal liability, you are not open to that same liability yourself.

Why is that so important to the internet?

Let’s first look at publishing liability in traditional circumstances to establish a baseline. To get your message out to the public, you’d need to find someone to print it in their physical medium – say, a magazine. That magazine would have editors, typesetters, printers, and a slew of other people involved in getting your words out, all of whom would have seen them first.

Now let’s say your words were an obviously malicious lie about your neighbor. Certainly you would be legally responsible for your libel. But because all those magazine staffers had the chance to exert editorial control over your words and failed to do so, under defamation law the magazine would be legally responsible as well.

The internet began under that model. But web developers quickly figured out how to create pages that asked users to fill out forms, and the content of those forms would become new pages viewable by all – no human oversight needed. This led to bulletin boards, file transfer sites, image galleries, and all sorts of content that was completely user-driven.

But this poses a legal problem. Under traditional publishing liability, a service – say, early dial-up provider Prodigy – could be liable for anything its users posted. So if you again wrote something obviously defamatory about your neighbor, Prodigy would be on the hook.

But in the online world, Prodigy wouldn’t know a thing about your post before it went public. Prodigy editors never looked at it. Prodigy designers never placed user text on the page. All that was automated. 

CDA230 bypasses this problem by allowing services to offer the automation necessary for the modern internet to function. It’s most obvious at the large scale; Twitter and Facebook would be utterly unviable without CDA230. But it also benefits the little sites. An online guest book on a bed-and-breakfast website, for example, would be a legal time bomb for the B&B without CDA230.

But you said that traditional publishing liability is based on the publisher’s knowledge of the users’ words. Shouldn’t social media companies lose CDA230 protection when they start editing users’ posts?

This became a common criticism of Facebook and Twitter after they began posting warnings on misinformation from then-President Donald Trump and 2020 election deniers, and later began removing such content outright. And it does make sense based on the traditional understanding of publisher liability: that it is a publisher’s knowledge of and control over a message that creates liability.

But there are a couple of misunderstandings here.

Critics of social media companies argue that by editing and removing content, the companies have moved from the realm of “platform,” which is protected by CDA230, to “publisher,” which is not. 

But there is no such distinction in CDA230. The law doesn’t mention the word “platform.” The closest it comes is “interactive computer service,” which has been interpreted to refer to everything from your cable company to social media companies to individual websites. The only mention of the word “publisher” is in reference to users, not interactive computer services. Services and users are the only categories the law references. There is no subcategory of “unprotected” service, i.e., what critics claim is a “publisher.”

Also, the argument that social media sites should lose immunity ignores the history that spurred CDA230’s creation. 

Before CDA230, some courts did recognize that traditional publisher liability didn’t make sense when internet services were automated. But they declined to extend that logic to platforms that tried to moderate their users.

In the case of Stratton Oakmont, Inc. v. Prodigy Services Co., the court found that Prodigy was liable for a user’s defamatory post because it moderated user posts to create a family-friendly experience. In contrast, “anything goes” rival CompuServe did not moderate and was held not liable for its users’ posts in Cubby, Inc. v. CompuServe Inc.

Those rulings set up a perverse incentive to let legally risky user content flourish, since moderating it would only invite lawsuits. For companies that wanted to offer a pleasant experience, disabling user content entirely might have been the only option.

Enter CDA230. Indeed, the section quoted above is titled “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” The point of CDA230 is to encourage internet service providers to moderate their users without fear of blowback.

But don’t social media companies need to be reined in? Shouldn’t we repeal CDA230 to force them to stop censoring people?

This is another common argument around CDA230: that its repeal or reform would stop social media companies from doing what they’ve done with regard to Mr. Trump, among others. Critics in Congress have recommended making CDA230 immunity contingent on platforms being politically neutral, or on limiting their moderation to certain subject matters.

There are a few flaws in this critique. First, CDA230 isn’t actually what allows social media companies to edit user content. That right is enshrined in the First Amendment. Even if CDA230 were repealed completely, Twitter would still have the right to publish – and edit – content on its own platform.

There’s also a problem with attempts to reform CDA230. Part of First Amendment jurisprudence is a prohibition against content-based laws; that is, the government cannot pass laws that target expression based on its message. 

That bar against content-based laws would very likely apply to any attempt to impose content requirements on social media moderation, even requirements framed as “politically neutral.” A “neutrality” mandate would itself burden social media companies’ own right to political advocacy, and is thus very likely unconstitutional.

Lastly, a complete repeal of CDA230 would likely result in more moderation of user content, not less. With the end of CDA230 immunity, the major concern of social media companies would be to reduce potential liability. Once-borderline cases would be much more likely to be moderated, since the cost of overlooking them would be that much higher.

So what can be done about CDA230?

The better question might be whether CDA230 is truly the problem. 

Democrats have proposed reforming the law as well, most recently in a bill put forward by Sens. Mark Warner, Mazie Hirono, and Amy Klobuchar. But while that bill targets immunity for harassment and discrimination rather than political bias, it’s just as problematic as the reforms proposed by conservative lawmakers. Legal scholar Eric Goldman writes that the bill’s “net effect will be that some online publishers will migrate to walled gardens of professionally produced content; others will fold altogether; and only Google and Facebook might survive.”

Digital rights advocates point to Big Tech’s dominance as the real issue, with the solution being more competition and transparency in the social media marketplace.

As a commentary from the digital rights nonprofit Electronic Frontier Foundation argued in November, “If Congress wants to keep Big Tech in check, it must address the real problems head-on, passing legislation that will bring competition to Internet platforms and curb the unchecked, opaque user data practices at the heart of social media’s business models.”

Before becoming Europe editor at the Monitor, Mr. Bright was the research attorney for the Digital Media Law Project at the Berkman Center for Internet and Society at Harvard University.

 
