Why some cities and states ban facial recognition technology

Some cities are raising concerns that facial recognition technology misidentifies minorities. Others see its usefulness in law enforcement. 

Matt O'Brien/AP
A video surveillance camera hangs on a pole outside City Hall in Springfield, Massachusetts, Oct. 17, 2019. Some city councilors are pursuing a ban against government use of facial recognition technology in city surveillance cameras.

Police departments around the United States are asking citizens to trust them to use facial recognition software as another handy tool in their crime-fighting toolbox. But some lawmakers – and even some technology giants – are hitting the brakes.

Are fears of an all-seeing, artificially intelligent security apparatus overblown? Not if you look at China, where advancements in computer vision applied to vast networks of street cameras have enabled authorities to track members of ethnic minority groups for signs of subversive behavior.

American police officials and their video surveillance industry partners contend that won't happen here. They are pushing back against a movement by cities, states, and federal legislators to ban or curtail the technology's use. And the efforts aren't confined to typical bastions of liberal activism that enacted bans this year: San Francisco, Oakland, Berkeley and the Boston suburbs of Somerville and Brookline.

Take the western Massachusetts city of Springfield, a former manufacturing hub where a majority of the 155,000 residents are Latino or black, and where police brutality and misconduct lawsuits have cost the city millions of dollars. Springfield police say they have no plans to deploy facial recognition systems, but some city councilors are moving to block any future government use of the technology anyway.

At an October hearing on the subject, Springfield City Councilor Orlando Ramos said he doesn't want to take any chances. "It would only lead to more racial discrimination and racial profiling," he said, citing studies that found higher error rates for facial recognition software used to identify women and people with darker skin tones.

"I'm a black woman and I'm dark," another Springfield councilor, Tracye Whitfield, told the city's police commissioner, Cheryl Clapprood, who is white. "I cannot approve something that's going to target me more than it will target you."

Ms. Clapprood defended the technology and asked the council to trust her to pursue it carefully. "The facial recognition technology does not come along and drop a net from the sky and carry you off to prison," she said, noting that it could serve as a useful investigative tool by flagging wanted suspects.

The council hasn't yet acted, and the Springfield mayor has threatened to veto the proposal that Mr. Ramos plans to reintroduce in January.

Similar debates across the country are highlighting racial concerns and dueling interpretations of the technology's accuracy.

"I wish our leadership would look at the science and not at the hysteria," said Lancaster, California, Mayor R. Rex Parris, whose city north of Los Angeles is working to install more than 10,000 streetlight cameras Mr. Parris says could monitor known pedophiles and gang members. "There are ways to build in safeguards."

Research suggests that facial recognition systems can be accurate, at least under ideal conditions. A review of the industry's leading facial recognition algorithms by the National Institute of Standards and Technology found they were more than 99% accurate when matching high-quality head shots to a database of other frontal poses.

But trying to identify a face from a video feed – a potentially useful technique for detectives – can cause accuracy rates to plunge. NIST found that recognition accuracy could fall below 10% when using ceiling-mounted cameras commonly found in stores and government buildings.

The agency hasn't studied the performance of facial recognition on body camera footage, although experts generally believe that its often-jumpy video will render the technique even less reliable.

In October, California Gov. Gavin Newsom signed a temporary ban on police departments using facial recognition with body cameras. Some other states have similar restrictions.

While California's three-year moratorium was opposed by law enforcement groups, companies that provide video-surveillance equipment have mostly reacted with shrugs. Many businesses were already moving carefully before subjecting themselves to the legal, ethical, and publicity risks of a technology that is facing backlash from privacy, civil liberties and racial justice advocates, not to mention bipartisan concern in Congress.

Axon, which supplies body-worn cameras to most of California's big cities and is the biggest provider nationwide, had already formed an AI ethics board of outside experts that concluded facial recognition technology isn't yet reliable enough to justify its use on police cameras. False identification could lead someone to be hurt or killed, said Axon CEO Rick Smith.

Even if facial recognition software were perfectly accurate, Mr. Smith said in an interview, the ability to track people's whereabouts raises constitutional and privacy concerns. "Do we want everybody who walks near a police officer to get their face identified and logged in a database?" he said.

Microsoft last year turned down an unnamed California police agency's request to equip all police cars and body cameras with Microsoft's facial recognition software, the company's president and chief legal officer Brad Smith wrote in a new book on tech policy. He said police wanted to match a photo of anyone pulled over, even routinely, against a database of suspects for other crimes.

Mr. Smith said the technology would wrongly identify too many people, especially women and people of color. The executive has warned that unregulated facial recognition could unleash "mass surveillance on an unprecedented scale," though he's opposed to an outright ban. Microsoft in November hired an attorney to speak out against a proposed ban in Portland, Maine.

Other companies including Amazon, which markets a face identification system called Rekognition to law enforcement, have shown fewer qualms about selling their technology to police. Some law enforcement agencies feed images from video surveillance into software that can search government databases or social media for a possible match.

Todd Pastorini, general manager at biometric forensics company DataWorks Plus, said it's important to distinguish between real-time crowd surveillance – which is rare in the U.S. – and the "extremely effective" method of running images through a pool of known police mugshots or driver's license photos to help identify a suspect.

"Society and the public are going to get frustrated" if governments block law enforcement from adopting a technology that keeps improving, he said.

Among his South Carolina company's biggest face-matching clients are Detroit and New York City, the latter of which first adopted facial recognition in 2011 and also uses software from French company Idemia.

"I'd absolutely be opposed to a ban," New York City Police Commissioner James O'Neill told reporters this fall.

Mr. O'Neill, who retired in early December, added that facial recognition hits are just one part of an investigation. "There is so much video in New York City today that to not use facial recognition would be irresponsible," he said.

This story was reported by The Associated Press. 
