Can AI help Facebook stop discriminatory advertising?

Advertisers have grown accustomed to targeting ads to specific audiences. But the tech giant is hoping to crack down on the use of metadata to exclude minorities from offers of housing, employment, or credit.

A man walks past a mural in an office on the Facebook campus in Menlo Park, Calif., June 11, 2014. The tech giant announced on Wednesday that it will use machine learning in an effort to comb ads and prevent discrimination. (Jeff Chiu/AP/File)

When faced with a challenge, what’s a tech company to do? Turn to technology, Facebook suggests.

Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.

In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms wade into newer territory as the company attempts to use machine learning to address, or at least not contribute to, social discrimination.

“Machine learning has been around for half a century at least, but we’re only now starting to use it to make a social difference,” Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. “It’s going to become increasingly important.”

Though analysts caution that machine learning has its limits, the approach carries tremendous potential for addressing these kinds of challenges. With that in mind, more companies – particularly in the tech sector – are likely to deploy similar techniques.

Facebook’s change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, the nonprofit investigative news site ProPublica tested the company’s ad-approval process with an ad for a “renter event” that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal – but it was nevertheless approved within 15 minutes, ProPublica reported.

Why? Because while Facebook doesn't ask users to identify their race and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, the languages they use, and so on. This kind of information is valuable to advertisers, who can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.

But by creating a demographic picture of a user, this data may make it possible to determine an individual’s race, and then improperly exclude or target individuals. The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition." 

There's a fine line between appropriate use of such information and discrimination, as Facebook’s head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: “a merchant selling hair care products that are designed for black women” will need to reach that constituency, while “an apartment building that won’t rent to black people or an employer that only hires men [could use the information for] negative exclusion.”

For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it’s illegal. That’s where machine learning comes in.

“We’re beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities – the types of advertising stakeholders told us they were concerned about,” the company said in a statement on Wednesday.

The computer “is just looking for patterns in data that you supply to it,” explains Professor Gordon. 

That means Facebook can decide which areas it wants to focus on – namely, “ads that offer housing, employment or credit opportunities,” according to the company – and then supply hundreds of examples of these types of ads to a computer.

If a human “teaches” the computer by initially labeling each ad as discriminatory or nondiscriminatory, a computer can learn to go “from the text of the advertising to a prediction of whether it’s discriminatory or not,” Gordon says.

This kind of machine learning – known as “supervised learning” – already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.
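To make that concrete, here is a minimal sketch of such a supervised text classifier, written in Python with scikit-learn. The handful of labeled ads and the choice of model are illustrative assumptions for this article, not a description of Facebook's actual system or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training ads (1 = discriminatory, 0 = not).
ads = [
    "Apartment for rent, no families with children",
    "Two-bedroom apartment near downtown, pets welcome",
    "Hiring drivers, men under 40 only",
    "Hiring drivers, all experience levels encouraged to apply",
]
labels = [1, 0, 1, 0]

# Convert each ad's text into word-frequency features, then fit a
# classifier that learns which words correlate with the human labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ads, labels)

# Predict a label for a new, unseen ad.
print(model.predict(["Studio for rent, no children please"]))
```

A production system would, of course, train on many thousands of labeled examples and use far richer features than word frequencies alone.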

But there are certainly limits to its effectiveness, Gordon adds.

“You’re not going to do better than your source of information,” he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads. 

“If the distribution of ads that you see changes, the machine learning might stop working,” Gordon explains, noting that content producers often change their strategies precisely to slip past AI filters, much as spammers do with email filters. Machines' insufficient grasp of context can also lead to high-profile problems, as in 2015, when Google Photos mistakenly labeled black people as gorillas.
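The earlier sketch shows why: a text classifier only has signal from words it saw during training, so an exclusionary ad that is simply reworded may contribute no recognizable features at all.

```python
# Reusing `model` from the sketch above: a reworded exclusionary ad whose
# vocabulary never appeared in training gives the classifier nothing to
# go on.
vectorizer = model.named_steps["tfidfvectorizer"]
reworded = ["Housing available, certain applicants need not inquire"]
print(vectorizer.transform(reworded).nnz)  # 0: every word is out-of-vocabulary
```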

“Teaching” the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine’s work. That makes the system vulnerable to human biases.

“That process of refinement involves sorting, labeling and tagging – which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like,” explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. “The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it.”

More overt bias issues have already been observed with AI bots, like Tay, Microsoft’s chatbot, which repeated Nazi slogans fed to it by Twitter users. While the bias introduced by human labelers may be subtler, since it is presumably unintentional, it could conceivably create persistent problems.

Unbiased machine learning “is the subject of a lot of current research,” says Gordon. One answer, he suggests, is having many teachers, since a consensus view of discrimination may be less vulnerable to any individual’s biases.
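A simple version of that idea is to collect a label for each ad from several annotators and keep the majority vote, so that no single labeler's judgment decides the outcome. A minimal sketch, with hypothetical annotators and ads:

```python
from collections import Counter

# Hypothetical labels from five annotators (1 = discriminatory, 0 = not).
annotations = {
    "ad_001": [1, 1, 1, 0, 1],
    "ad_002": [0, 0, 1, 0, 0],
    "ad_003": [1, 0, 1, 1, 0],
}

def consensus(labels):
    """Return the label most annotators chose."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

for ad_id, labels in annotations.items():
    print(ad_id, "->", consensus(labels))  # ad_001 -> 1, ad_002 -> 0, ad_003 -> 1
```

Using an odd number of annotators avoids ties; in practice, closely split votes could be escalated for further human review.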

Since October, the company has been working with civil rights groups and government organizations to strengthen its nondiscrimination policies. Despite potential obstacles, those groups seem pleased with the progress that the AI system and associated steps represent.

“We ‘like’ Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements,” Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights, said in a statement on Wednesday.

And machine learning is likely to become a component in other companies’ efforts to combat discrimination, as well as perform a host of other functions. Though he notes that tech companies are “typically fairly secretive” about their plans, Gordon suggests that such projects are probably already underway at many of them.

“Facebook isn’t the only company doing this – as far as I know, all of the tech companies are considering a similar ... question,” he concludes.

But is the ability to target advertising on social media platforms really worth the trouble? Professor Webb, who also teaches at New York University’s Stern School of Business, sounds a note of caution.

“My behavior in Facebook is not an accurate representation for who I really am, how I think, and how I act – and that’s true of most people,” she writes. “We sometimes like, comment and post authentically, but more often we’re revealing just the aspirational versions of ourselves. That may ultimately not be useful for would-be advertisers.”
