Twitter tests new misinformation labels. Will they backfire?

“Disputed,” “Misleading,” or “Stay informed”? As Twitter revamps its misinformation labels for better visibility and utility, concerns arise: Will these labels really help people discern facts? And do they allow Twitter to avoid more important content moderation work?

John Raoux/AP/File
A laptop displays Twitter's login page in Orlando, Florida. Misinformation labels on the social media platform work better when they are non-judgmental and build trust with users, says a designer at Twitter.

Last May, as Twitter was testing warning labels for false and misleading tweets, it tried out the word “disputed” with a small focus group. It didn’t go over well.

“People were like, well, who’s disputing it?” said Anita Butler, a San Francisco-based design director at Twitter who has been working on the labels since December 2019. The word “disputed,” it turns out, had the opposite effect of what Twitter intended, which was to “increase clarity and transparency,” she said.

The labels are an update from those Twitter used for election misinformation before and after the 2020 presidential contest. Those labels drew criticism for not doing enough to keep people from spreading obvious falsehoods. Now, Twitter is overhauling them in an attempt to make them more useful and easier to notice, among other things. Beginning Thursday, the company will start testing the redesigns with some American users on the desktop version of its app.

Experts say such labels – used by Facebook as well – can be helpful to users. But they can also allow social media platforms to sidestep the more difficult work of content moderation – that is, deciding whether or not to remove posts, photos, and videos that spread conspiracies and falsehoods.

“It’s the best of both worlds” for the companies, said Lisa Fazio, a Vanderbilt University psychology professor who studies how false claims spread online. “It’s seen as doing something about misinformation without making content decisions.”

While there is some evidence that labels can be effective, she added, social media companies don’t make public enough data for outside researchers to study how well they work. Twitter only labels three types of misinformation: “manipulated media” such as videos and audio that have been deceptively altered in ways that could cause real-world harm, election and voting-related misinformation, and false or misleading tweets related to COVID-19.

One thing that’s clear, though, is that they need to be noticeable in a way that prevents eyes from glossing over them in a phone scroll. It’s a problem similar to the one faced by designers of cigarette warning labels. Twitter’s election labels, for instance, were blue, which is also the platform’s regular color scheme. So they tended to blend in.

The proposed designs add orange and red so the labels stand out more. While this can help, Twitter says its tests also showed that if a label is too eye-catching, it leads more people to retweet and reply to the original tweet – not what you want with misinformation.

Then there’s the wording. When “disputed” didn’t go over well, Twitter went with “stay informed.” In the current test, tweets that get this label will carry an orange icon, and people will still be able to reply to or retweet them. Such a label might go on a tweet containing an untruth that could be harmful, but isn’t necessarily immediately so.

More serious misinformation – for instance, a tweet claiming that vaccines cause autism – would likely get a stronger label, with the word “misleading” and a red exclamation point. It won’t be possible to reply to, like, or retweet these messages.

“One of the things we learned was that words that build trust were important and also words that were not judgmental, non-confrontational, friendly,” Ms. Butler said.

This makes sense from Twitter’s perspective, Ms. Fazio said. After all, “a lot of people don’t like to see the platforms have a heavy hand,” she added.

As a result, she said, it’s hard to tell if Twitter’s main goal is to avoid making people angry and alienating them from Twitter instead of simply helping them understand “what is and isn’t misinformation.”

This story was reported by The Associated Press. 
