Think computers are less biased than people? Think again.

Aly Song/Reuters
In September, government officials gathered with AI scientists and entrepreneurs at the 2018 World Artificial Intelligence Conference in Shanghai. AI-driven tech is cropping up in nearly every major industry, from policing and health care to insurance and investment banking.

Why We Wrote This

Artificial intelligence is often billed as the answer to biased decisionmaking. But as long as people write that code, humans will have to wrestle with their own biases.

From smart trash bins to crime forecasting, artificial intelligence is creeping into our lives in ways we might not even notice.

“Whether you are a resident involved in city programming or just a tourist traveling in a city, a lot of city programs and the ways you interact with municipalities are with AI,” says Rashida Richardson, director of policy research at the AI Now Institute, an interdisciplinary research center at New York University studying the social implications of artificial intelligence.

In fact, municipalities worldwide are expected to invest more than $81 billion in AI-driven technology in 2018, according to IDC’s “Worldwide Semiannual Smart Cities Spending Guide,” and that spending is projected to reach $158 billion in 2022. The technology ranges from smart trash bins that send a wireless signal to garbage collectors when the bins are full, to real-time crime centers that feed police officers instant information to help identify and stop emerging crime.

The explosion of AI-driven technology has been a boon for cash-strapped cities and towns interested in boosting services while tightening budgets. At its best, AI removes a degree of subjectivity from decisionmaking. But artificially intelligent systems are built by people. And embedded in the code for these systems lie some very human limitations. The issue, says Ms. Richardson, is that the public often doesn’t know what data is being used to make these decisions, or even that data is driving them at all.

In some cases, AI is being asked to make increasingly complex decisions that can significantly impact someone’s life, such as deciding if someone qualifies for Medicaid or forecasting who might commit a crime. Yet, most municipalities lack the technical expertise to understand how the technology actually works or to determine if the algorithm is biased, Richardson says.

Sometimes there are inexpensive, low-tech solutions that might work better. Take the example of AI determining whether a defendant is a pretrial flight risk. If the goal is to make sure a defendant arrives for his or her court date, AI isn’t very effective in achieving that specific outcome, Richardson says.

“There are cheaper methods available, such as texting someone to remind them to appear in court or making people aware of the consequences of not appearing in court,” she says.

The data isn’t always accurate

AI’s ability to predict an outcome is only as accurate as the data it’s modeled on. An algorithm is a series of steps that lead to a predetermined outcome, Richardson explains. The data could contain an error or flaw introduced by the developers, and if that flaw goes unrecognized, the mistake will be perpetuated each time the algorithm is used.
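To make that concrete, here is a minimal, hypothetical sketch: the “historical” approval records and feature names below are synthetic and invented for illustration, but they show how a model trained on flawed decisions simply reproduces the flaw.

```python
# Illustrative only: synthetic historical approvals in which one group was
# approved less often than another at the same income level.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)   # same qualification distribution for everyone
group = rng.integers(0, 2, n)    # 0 or 1, e.g. two neighborhoods (invented)

# The flaw: past reviewers approved group 1 less often at identical incomes.
p_approve = 1 / (1 + np.exp(-(income - 50) / 10)) - 0.25 * group
approved = rng.random(n) < np.clip(p_approve, 0, 1)

# An algorithm trained on those records learns the same pattern.
model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([income, group]), approved
)

# Two applicants identical in every respect except group membership.
print(model.predict_proba([[55, 0], [55, 1]])[:, 1])
# The predicted approval probability differs only because of group:
# the historical mistake is perpetuated each time the model is used.
```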

AI isn’t immune to the cognitive biases that can sway decisions, according to a white paper by a group of scientists from the Czech Republic and Germany. Biases such as “confirmation bias” (accepting a result because it confirms a belief) or “availability bias” (giving preference to information and events that are more recent and memorable) can become part of the algorithm, the team finds. For instance, a data scientist developing an algorithm may select data that supports his or her hypothesis and disregard data that points to the opposite conclusion.
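A tiny, invented example shows how that kind of selection shapes what an algorithm later learns. In the sketch below, the numbers and variable names are made up: patrol hours and incident counts are actually unrelated, but keeping only the records that fit the analyst’s hypothesis manufactures a strong relationship that any downstream system would inherit.

```python
# Illustrative only: "confirmation bias" entering a dataset through selection.
import numpy as np

rng = np.random.default_rng(2)
hours_of_patrol = rng.uniform(0, 10, 1000)
incidents = rng.poisson(5, 1000)          # unrelated to patrol hours by design

full = np.corrcoef(hours_of_patrol, incidents)[0, 1]

# The hypothesis: more patrol hours mean fewer incidents.
# Keep only rows consistent with it and discard the rest.
keep = ((hours_of_patrol < 5) & (incidents > 5)) | (
    (hours_of_patrol >= 5) & (incidents <= 5)
)
filtered = np.corrcoef(hours_of_patrol[keep], incidents[keep])[0, 1]

print(f"correlation on all data:      {full:+.2f}")      # near zero
print(f"correlation on selected data: {filtered:+.2f}")  # strongly negative
```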

Outcomes also can be biased if the data isn’t based on diverse experiences, says Pradeep Ravikumar, an associate professor in the machine learning department at Carnegie Mellon University’s School of Computer Science in Pittsburgh. If, for example, the AI assistant in a municipality’s office of community and human services isn’t asking questions tailored to a diverse population, the outcomes could be biased, he says.

Yet, Professor Ravikumar believes that as long as the data scientists developing an algorithm understand the social issues at stake, and the people using the technology understand how it works, AI has the potential to make decisions that are less biased than those humans would make.

You can examine AI to see if it’s biased, he says. You can look at what drove a decision and see what needs to be changed for the technology to make a different decision.
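For simple models, that kind of examination can be sketched in a few lines. The eligibility model and applicant below are hypothetical, but the idea is the one Ravikumar describes: break a single decision into the contribution of each input so a reviewer can see what would have to change.

```python
# Illustrative only: which inputs drove one decision of a simple
# (hypothetical) eligibility model?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(30, 10, n)        # invented feature, in $1,000s
household = rng.integers(1, 7, n)     # invented feature: household size
score = -(income - 30) / 10 + 0.4 * (household - 3)
qualifies = rng.random(n) < 1 / (1 + np.exp(-score))

X = np.column_stack([income, household])
model = LogisticRegression(max_iter=1000).fit(X, qualifies)

applicant = np.array([38.0, 2])       # one hypothetical case
for name, contrib in zip(["income", "household size"],
                         model.coef_[0] * applicant):
    print(f"{name:15s} {contrib:+.2f}")
print(f"{'baseline':15s} {model.intercept_[0]:+.2f}")
# Seeing each input's push on the score shows what would have to change
# for the technology to reach a different decision.
```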

AI requires human oversight

However, AI decisions are rarely questioned, Richardson says. “The problem with the government use of these systems is there is a false sense of objectivity,” she says.

AI systems don’t always come under the same level of scrutiny as a person would if they were making these decisions.

“Human oversight is critical in deploying AI,” says Adelaide O’Brien, research director of government digital transformation strategies for IDC Government Insights, a market intelligence firm based in Framingham, Mass.

Government officials need to review AI recommendations and subject the algorithms to formal performance reviews, just as they would for human decisionmakers, she says. There also needs to be a clear plan for addressing errors and perceived privacy violations, she adds.

Yet, it’s not just our local governments using AI to make decisions. Corporations and banks are using AI to decide who gets hired, who gets a loan, and whether you qualify for insurance, says Cathy O’Neil, author of “Weapons of Math Destruction,” which looks at the way big data increases inequality and threatens democracy.

“Any time we apply to jobs, our resumes and applications are fed through algorithms which filter out most applications,” Dr. O'Neil writes in an email. “The same [is true] for applications for credit cards, loans or insurance. We have no information about how these scoring systems work, whether they have the right data about us, or any way to appeal a bad score (which we don’t even hear about directly).”

Transparency is essential to preventing bias, says Jouni Harjumäki, a graduate student researcher at the University of Helsinki in Finland who is studying ways to prevent discrimination in AI use. Policymakers and legislators need to engage in this discussion as well, he says, otherwise there is no legal obligation or need for companies to be transparent about the way their algorithm makes decisions.

O’Neil agrees that AI needs to be regulated. Algorithms should be tested based on a well-defined, publicly available definition of fairness, she says. “At the end of the day these systems choose the lucky from the unlucky, and it’s a system built by the lucky.”
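What a publicly available definition of fairness could look like in practice is sketched below. The threshold and data are invented, and a real audit would use whatever definition regulators actually publish, but the test itself takes only a few lines once an algorithm’s decisions and group labels are available.

```python
# Illustrative only: one common fairness check, the "disparate impact" ratio
# of approval rates between groups.
import numpy as np

def disparate_impact(decisions, group):
    """Ratio of approval rates across groups: min(rate) / max(rate)."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Invented model outputs for 10 applicants in two groups.
decisions = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])   # 1 = approved
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 for further review.
print("flag for review" if ratio < 0.8 else "within threshold")
```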

One way to lessen the consequences of using AI is to let the public know when the technology is being used to make decisions that can affect their ability to get a loan, qualify for health benefits, or even be eligible to post bail after an arrest. Municipal governments should create a database or public listing of the types of decisions that are being made by AI to bring more public awareness to its use, Richardson recommends.

“The question,” Richardson says, “is how to mitigate bias because there is no way to prevent it.”
