Modern field guide to security and privacy

Opinion: It's time for an about-face on facial recognition

The breakdown in talks between advocacy groups and industry over facial recognition guidelines should alarm anyone who doesn't want to be recorded, identified, and cataloged everywhere they go.

Wolfgang Rattay/Reuters
An employee of Germany's Federal Printing House is scanned by the "easyPass" machine, a biometric full-ID-management border control system.

Until recently, concerns over facial recognition technologies were largely theoretical. Only a few companies could create databases of names and faces large enough to identify significant portions of the population by sight. These companies had little motivation to widely exploit this technology in invasive ways.

Unfortunately, things are changing – and fast. The tech industry appears poised to introduce new facial recognition products and does not seem to take seriously concerns over personal identification. In addition to downplaying the important role biometrics play in modern data security schemes, industry is ignoring the importance of maintaining obscurity in our day-to-day lives.      

Nine public interest groups, including the American Civil Liberties Union, the Center for Democracy and Technology, and the Electronic Frontier Foundation, recently walked away from multistakeholder talks over what should go into a voluntary code of conduct that places restrictions on facial recognition software. With the backing of the Commerce Department, these talks have been under way since 2014; they are an outgrowth of the blueprint agenda put forth in the Obama administration's 2012 Consumer Privacy Bill of Rights and its more recent discussion draft of a consumer privacy protection bill.

The sticking point for privacy advocates is that tech companies and lobbyists are not in favor of a general rule (subject to exceptions) requiring that companies get consent before people’s faces are scanned and linked to an identifying name. This expectation isn’t new. Back in 2012, the Federal Trade Commission released a report on best practices that discussed a hypothetical app that could use facial recognition to identify strangers. It recommended restricting the app’s use to people who have chosen to use the service.

According to Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law School and a regular participant in the talks, economic self-interest is motivating industry to take an uncompromising stance.

"I think a lot of companies see an upside in using facial recognition to serve targeted ads at people based on their age, gender, and ethnicity. Retailers are also using it to identify VIPs, known shoplifters, and other undesirables – like 'known litigious individuals,' " he said. "They have a financial interest in keeping facial recognition in an unregulated, law-free zone – or at least keeping it that way outside of Texas and Illinois. I think that these financial interests were behind industry resistance in the talks."

So far, just Texas and Illinois require disclosure and consent before companies can collect and use biometrics such as facial identifiers. Any code that is eventually created will likely demand less of companies than the law in those states, and weak restrictions can adversely influence future policy. The imprimatur of a code might convince politicians in many states that the matter is settled and minimal safeguards are appropriate.

It’s important, therefore, for the public to have a clear sense of how to assess the claims in the version of the code that ultimately gets drafted. As we see it, one question should be prioritized. Does the code carefully address the problem of diminished obscurity – the personal and social repercussions of dramatically reducing the effort and expense required to determine who someone is based on how he or she looks? If not, it isn’t oriented toward protecting the public good and should be treated accordingly.

The tech industry will be tempted to sidestep the issue of obscurity. We imagine their case for permissive and widespread use of facial recognition will rely on the fact that your name and face are the most public things about you. In the US, most people show their faces whenever they go out in public. Sure, there are exceptions: burkas, ski masks, Halloween costumes, or the occasional paper bag over a celebrity's head. But those aren't the norm. 

And when talking with others in public, people regularly say both first and last names. Of course, this doesn’t always happen. Sometimes you can chat without ever explicitly saying whom you are talking with. At other times, nicknames will do. But, still, unless the situation is unusual, nobody will bat an eye if you say, “Hi John!” or “Hello Jane!”

So, on the surface, the two main units of analysis regarding facial recognition technology – names and faces – don’t seem to be private at all, especially when compared with Social Security numbers, which people carefully guard. And, let’s be honest, folks don’t just regularly broadcast these highly personal features in face-to-face settings. Plenty of people set up public online profiles that do the same thing. There’s LinkedIn, company directories, and so many other ways to show the world what a person looks like and what name he or she goes by. 

Since faces are unique – "significantly altering a face to make it unrecognizable is difficult" – and names are distinctive, why do many people seem unconcerned about their public dissemination? The answer is simple. The norms governing our attitudes toward the name-face connection developed during time periods when it was hard to identify most strangers. Human beings have limited memories and limited exposure to others. Indeed, we've come to rely on the fact that we can basically hide in plain sight in public, protected by zones of obscurity. As a result, we've had little reason to worry that our presence will be translated into information that can be stored long-term, as well as quickly recalled and probingly analyzed.

Ubiquitous and unrestrained facial recognition technologies wouldn't just alter this longstanding presumption; they would shatter it entirely. In the brave new world, we'd need to presume we're being identified everywhere (except in Texas and Illinois). As a result, two undesirable temptations would take over. We could sadly admit defeat and acquiesce to losing control of our signature picture and words. Or we would be pushed to pursue aggressive – possibly paranoid – risk management strategies.

In order for industry to try to make a persuasive case and minimize pro-privacy backlash, we further suspect it will conflate two different things: your face and the faceprint that facial recognition technologies use. Your face is not scalable. But your faceprint is; a machine can read it. Indeed, once a face is converted to data points and made machine-readable, it ceases being a public-facing part of ourselves that we voluntarily expose to others. It becomes a resource that others control.
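To make the face/faceprint distinction concrete, here is a minimal sketch of what "converted to data points and made machine-readable" means in practice. Real systems derive high-dimensional embeddings from neural networks; the names, four-element vectors, and matching threshold below are invented purely for illustration.

```python
import math

# Hypothetical "faceprints": each face reduced to a short feature vector.
# (Invented values -- real embeddings have hundreds of dimensions.)
KNOWN_FACEPRINTS = {
    "Jane Doe": [0.12, 0.87, 0.44, 0.31],
    "John Roe": [0.90, 0.15, 0.62, 0.05],
}

def euclidean(a, b):
    """Distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.25):
    """Return the closest enrolled name, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in KNOWN_FACEPRINTS.items():
        dist = euclidean(probe, faceprint)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The point of the sketch is the asymmetry the article describes: once a face becomes a vector, whoever holds the database – not the person whose face it is – decides how it is searched, at what threshold, and against which watchlist.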

It’s important to differentiate face from faceprint because our faceprints are similar to two things that have high privacy value: passwords and beacons.

We’re increasingly using data about our face to authenticate our identities to our smartphones and user accounts. That’s reason enough to be skeptical of widespread deployment of facial recognition technologies and the proliferation of name-face databases. It’s a data security risk.

But our faceprints, like fingerprints that are constantly on display, can also act like a beacon that leads watchers right to us, like a permanent trail of breadcrumbs that won't wash away in the rain. This power can alter the bedrock conventions for relating to others in public. Often enough, we currently don't remember the faces of people we sit next to in restaurants, on planes, and elsewhere. This gives us a degree of freedom to move to and fro, content that judgments about us remain snappy and ephemeral, and we retain significant power to shape what those around us know about our personal lives. To give but one example, once parishioners start attending church because they're worried about facial recognition outing their absences, we've really got to question just who is benefiting from these technologies and how.

Given the setbacks impeding the voluntary code of conduct, we remain skeptical about how industry will proceed with facial recognition technology. But pressure from the public, advocates, and lawmakers might force industry to confront the myth that showing your face in public is the same thing as being easily identifiable everywhere you go. People’s passwords and targeting beacons aren’t fair game to collect and deploy, and our faceprints deserve similar treatment.

Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology. Follow him on Twitter @EvanSelinger. 

Woodrow Hartzog is an associate professor at Samford University's Cumberland School of Law. Follow him on Twitter @Hartzog.

Editor's note: This piece was updated after publication to clarify the position of privacy groups that pulled out of talks to establish guidelines for facial recognition technology. Advocacy groups were open to allowing exceptions for proposed rules that would compel companies to obtain consent before scanning faces and linking those images to names.
