This article appeared in the February 16, 2018 edition of the Monitor Daily.

Behind a corporate stirring on dubious social media content

Let’s stay with the “good information” theme. A US consumer-goods giant has threatened to pull its ad spending from social media firms that don’t do more to weed out offensive content. This piece explores the shift that represents – away from a faith in algorithms and toward a more nuanced view of corporate responsibility.

This week, a tug of war over online values became more public. The consumer-goods giant Unilever called out the realm of social media as “little better than a swamp,” and threatened to pull ads from platforms whose algorithms promote objectionable content such as hate speech. Media experts say it’s not that Unilever is just trying to be a good global citizen. Yes, the company might win some fans for its stand, but corporate reputations and profits can suffer when ads run next to offensive content masquerading as news or entertainment. Either way, Unilever's move symbolizes rising pressure on the dominant conduits for online ads, Facebook and Google, to adjust their practices. (That’s some context behind a recent algorithm change at Facebook and Google’s announcement, counterintuitively, of a new ad-blocking feature.) Whenever there’s a rise of corporate social responsibility, says communications expert Victor Pickard, it’s usually because of “public pressure, commercial imperatives, and the threat of government regulation.”

Noah Berger/AP/File
Conference workers speak in front of a demo booth at Facebook's annual F8 developer conference, in San Jose, Calif., April 18, 2017. Advertisers worried about losing trust with customers are putting pressure on social media sites to make changes to how and where their ads appear.

Since their ascendance in the 2000s, Google and Facebook have emerged as rule-setters for how businesses and people interact online. The titans of search and social media have largely defined how ads and other corporate content would appear, where they would flow, and the metrics of online advertising success.

That’s starting to change. Companies are seeing that digital marketing can bring increasing opportunities but also reputational risks. Corporations have begun pushing back, often quietly.

On Monday, one top advertiser, Unilever, went public with its criticism, calling social media “little better than a swamp”  and threatening to pull ads from platforms that leave children unprotected, create social division, or “promote anger or hate.” That comes a year after Procter & Gamble adjusted its own ad strategy, voicing similar concerns.

This pressure from the business world parallels the broader push for reform that has emerged since news reports and other investigations have unearthed examples of fake news, extremist material, and graphic content used to manipulate public discourse and sway elections. It’s a complex dynamic, a kind of three-way tug of war among the digital platforms, corporate advertisers, and the media.

The battle is mostly about revenues. But it also involves a clash of ideals.

Gradually, the ideals of techno-optimism – the faith that algorithms can replace human judgment and that society benefits as more information flows – are giving way to a more nuanced view: some information is better than other information, and some of it is not only repugnant but downright dangerous to social cohesion.

“They’re cognizant of the problems,” says Jason Kint, chief executive of Digital Content Next, a trade group that represents many big entertainment and news organizations. “The technology, it appears, is actually allowing bad actors to amplify misinformation and garbage while at the same time squeezing out the economics of the companies that are actually accountable to consumer trust.”

Trust is a key driver for corporations pushing the social and search platforms to change.

“Fake news, racism, sexism, terrorists spreading messages of hate, toxic content directed at children – parts of the internet we have ended up with is a million miles from where we thought it would take us,” said Keith Weed, Unilever’s chief marketing and communications officer, in a speech Monday to internet advertisers. “This is a deep and systematic issue – an issue of trust that fundamentally threatens to undermine the relationship between consumers and brands.” 

That risk of lost trust with customers threatens mainstream corporations, and the result is pressure on Google and Facebook to make big changes. Americans’ trust in social media and search engines has fallen 11 percentage points since last year, according to the 2018 Edelman Trust Barometer. By contrast, Americans’ trust in traditional and online media rose 5 percentage points.

Afolabi Sotunde/Reuters
A woman stands behind a machine that is part of a toothpaste manufacturing line at the Unilever factory in Lagos, Nigeria, on Jan. 18, 2018. Unilever is among the companies voicing concern about how their ads may appear next to offensive content unless web platforms like Facebook and Google adjust their algorithms.

How the election heightened awareness

Change is likely to prove difficult for the digital platforms, for several reasons. First, it’s technically challenging to track down who’s really behind each post, as Facebook discovered when it investigated Russian use of its platform during the 2016 elections. Fortunately for the platforms, they don’t necessarily have to ban borderline offenders and confront charges of censorship, media specialists say. Platforms like Facebook may just need to ensure that objectionable material doesn’t get promoted by their algorithms.

That’s a technical issue, because the algorithms have until recently been geared to making money, not policing content. In 2016, for example, when Kellogg’s and Warby Parker were embarrassed by reports that their online ads were showing up on Breitbart, both companies said they had not intended to advertise on the controversial nationalist publication's site. They pointed instead to “retargeting ads,” the technology that allows a company’s ads to follow users to subsequent websites after they have clicked on the company's website.

It was the digital platforms’ lack of transparency about where ads are placed and who sees them that prompted Procter & Gamble’s public criticism last year. Consumer companies are extremely sensitive about the values their brands are associated with.

“You want to be next to fitting content; it’s really important in media effectiveness,” says Angeline Scheinbaum, a consumer psychology expert at the University of Texas and editor of a new book, “The Dark Side of Social Media: A Consumer Psychology Perspective.” “Now more automated media buying has resulted in advertisers being horrified about where their ad is ending up.”

Fine-tuning algorithms, with profits at stake

The second and bigger difficulty is that changing their practices will likely cost Google and Facebook ad revenue, after several years of huge profits earned by setting their own rules.

Although a Facebook executive commended Unilever's stand, and said the company would work to meet advertiser expectations, the social media giant faces financial pressures of its own. 

Already, Facebook has seen a decline of 50 million hours in network use because of the company’s new push to increase the quality of interactions rather than the quantity, according to CEO Mark Zuckerberg. Translation: more posts about friends, fewer viral cat videos and fake news posts. Also worrying to Facebook: research firm eMarketer forecasts 2 million users under 25 will quit the social network this year.

On Thursday, Google unveiled an ad blocker for its Chrome web browser, a counterintuitive move from a company that makes the bulk of its money from targeted advertising. The initiative came from a Google-inspired collaboration with advertising and publishing executives aimed at removing online ads that people find most annoying. But in targeting a dozen ad formats, the move will affect revenues most heavily at companies other than Google, and some members of the coalition grumbled that the search giant had dominated the process, according to The Wall Street Journal.

Furthermore, Facebook reportedly lobbied successfully to be exempted from the new ad blocking, and a pop-up ad maker won a partial exemption as well.

The two giants are not hurting financially. EMarketer expects Google and Facebook to capture two-thirds of US digital advertising this year.

'A murky grey area'

Many observers are cautiously optimistic that the three-way tug of war will be resolved.

“Until now, most sites and publishers have focused on cleaning up the illegal content, such as hate speech or pirated content,” Daniel Castro, vice president at the Information Technology and Innovation Foundation in Washington, writes in an email. “But there is a lot more content that is in a murky grey area. And here is where sites may decide the content should remain – lest removing it drive away users – but that they allow advertisers to distinguish what types of content they are willing to advertise near.”

The replacement of techno-optimist ideals with corporate values may not be the ultimate answer, however, if the history of previous media disruptions is any guide.

The rise of mass media more than a century ago unleashed yellow journalism. And the advent of television led to the rigging of network game shows in the 1950s.

“There is a recurring pattern of new media becoming overly commercialized and socially irresponsible,” Victor Pickard, a professor of communications at the University of Pennsylvania in Philadelphia, writes in an email. “Corporations and advertisers rein in these commercial excesses only when it becomes absolutely necessary, and usually to prevent a loss in profit. So more often it is public pressure, commercial imperatives, and the threat of government regulation that incentivizes corporate social responsibility.”

That public discourse will be needed again, Michelle Amazeen, a mass communications professor at Boston University, writes in an email. “What is profitable isn't always what's best for society… There [are] too many conflicting interests to leave it to corporations to regulate social media.”

( Illustration by Jacob Turcotte. )
