How technology tramples on freedom
Rapid advances in biometric technology mean the public is surveilled – and their movements recorded – more than ever before. If this technology spreads without limits, it could soon impinge on basic rights.
Through our governments, we face profound, long-term policy choices with respect to biometric data, both as to that data's collection and as to its use. But for these to be genuine choices, we must act on these issues before technological advances make the decision for us.
According to the Merriam-Webster dictionary, the word biometrics means "the measurement and analysis of unique physical or behavioral characteristics (as fingerprint or voice patterns) especially as a means of verifying personal identity." Or, more generally, the Oxford English Dictionary says it constitutes "the application of statistical analysis to biological data."
We assume without proof that almost any observable part of the individual human body is unique when examined at fine enough detail. As technology advances, then, ever-finer details become observable, and what falls within the collective noun "biometrics" only grows. It is irrelevant whether expanded observability comes from data that was heretofore unreachable and can only now be captured, or from data long subject to capture that can only now be processed efficiently. Either way, all biometry is personally identifiable information (PII). As with all data, the conservative presumption is that once collected it stays collected: assured deletion is unsolved both as a technical problem and as a policy problem, and is likely to remain so.
Within biometrics as we currently know them, data acquisition happens through contact or noncontact. Contact, such as putting your finger on a stamp pad and then making a fingerprint, came first. Noncontact data, such as an iris scan, is more recent. With contact forms, such as a cheek swab for DNA analysis, technologic advance can be gauged by improvements over time in processing speed and precision. With noncontact forms, such as facial recognition within a distant crowd, technologic advance can be gauged by improvements over time in the standoff distance at which required accuracy can be obtained.
We also assume without proof that a biometric is unmodifiable, that a well-drawn biometric is intrinsic. (We dismiss, for now, edge cases such as thorough-going gene therapy and/or face transplants.) Intrinsicness matters in that a biometric detail, once leaked, is never again secret, because the biometric itself does not change. (The loss of several million fingerprint files in the Office of Personnel Management breach illustrates the point.)
And we assume without proof that biometric systems have moved, or will move, from conveniences into infrastructure. Where today's deployed systems largely use biometrics to verify a claim of identity, as technologies (and therefore observability) advance, identity ascertainment will cease to be an assertion ("My name is Dan") followed by some sort of verification and will instead be a directly observable fact ("Sensors say that this is Dan").
Noncontact biometry, in particular, has implications for both security and privacy. Before proceeding, the terms "security" and "privacy" need definition, if only to make clear the biases of the present author. First, a state of security is the absence of unmitigable surprise. Second, privacy is the power to selectively reveal yourself to the world; ergo, a state of privacy is the effective capacity to misrepresent yourself.
With respect to security, one expects a security technology either to curtail surprises or to abet the mitigation of what surprises do happen. A biometric system can substantially curtail surprises involving false claims of identity. Because unnoticed (and unchallenged) false claims of identity lead to many other surprises, direct biometric identification, rather than an assertion of identity followed by some verification, has security value – but only if well-constructed. The details of good construction are not in scope for this essay except for one: The biometric identifier must be collected in a noncompromised setting and stored in a manner that does not permit mass breach events. A noncompromised setting all but surely means offline capture of the reference biometric via a trusted device followed by local processing on that device of whatever raw biometric was captured while otherwise offline. Only derived (mathematically modified) biometrics can be allowed to leave the device with which they were collected. In no case should raw biometric data be centrally stored in an online system (as the OPM episode demonstrated so well).
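The requirement that only derived, mathematically modified biometrics ever leave the capture device can be made concrete with a toy sketch. This is illustrative only: the quantization step, the feature values, and the function names are hypothetical, and production systems use fuzzy extractors or secure sketches to tolerate measurement noise rather than the naive bucketing shown here.

```python
import hashlib
import hmac
import secrets

def derive_template(raw_features: list[float], device_salt: bytes) -> bytes:
    """Derive a non-reversible biometric template on-device (sketch).

    Assumes repeated captures of the same trait quantize to the same
    buckets; real systems use fuzzy extractors to handle noisy readings
    that straddle bucket boundaries.
    """
    # Quantize each raw measurement so small capture-to-capture jitter
    # maps to the same value.
    quantized = bytes(int(f * 16) & 0xFF for f in raw_features)
    # Keyed one-way transform: the raw biometric never leaves the device,
    # and the same trait enrolled on a different device (different salt)
    # yields a different template, limiting the blast radius of a breach.
    return hmac.new(device_salt, quantized, hashlib.sha256).digest()

# Enrollment happens offline, on the trusted capture device; the salt
# stays in that device's secure storage.
salt = secrets.token_bytes(32)
enrolled = derive_template([0.50, 0.10, 0.90], salt)

# A fresh, slightly noisy capture of the same trait still matches...
assert hmac.compare_digest(enrolled, derive_template([0.51, 0.11, 0.91], salt))
# ...while a different person's features do not.
assert not hmac.compare_digest(enrolled, derive_template([0.20, 0.80, 0.30], salt))
```

The design point is the one the paragraph makes: a breach of any store holding these derived values yields neither the raw biometric nor a template reusable on another device.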
But the above argument runs headlong into a different technologic trend: the rapidly increasing number of points of observation, e.g., automobile entry by way of iris scans. As the number of points of observation increases, it will be entirely natural for their makers to want some form of consolidation so that each device does not have to be trained separately offline. Thus, coming full circle, a central store of biometric data again looks desirable. Whereas a failure of biometric identification on a single device is a surprise for which mitigation is feasible, the exposure or modification of such a central store has no obvious mitigation; once one record in it is shown to have been modified by unknown methods or parties, there can be no succinct scoping of how many records must be re-initialized from scratch. That is particularly problematic if contact with the parties involved ordinarily requires those parties to confirm their identity with the selfsame biometric now under suspicion.
The conundrum of how to avoid unmitigable surprises underscores a societal change well under way, namely the public's demonstrated willingness to trade (the risks of) data retention for convenience (always) and security (much of the time). Biometric identification is certainly a discussable example of that societal change. Perhaps what heretofore we have known as confidentiality is becoming quaint. And irrelevant. Perhaps policymakers will have to reposition confidentiality within some new paradigm that prioritizes a right to integrity over a right to confidentiality, particularly as points of observation for biometric data proliferate. That proliferation, coupled with increasing standoff distances at which data can be collected, is likely to soon make the majority of biometric observation not a choice on the part of the individual. Biometrics thus eclipse the principal paradigm of privacy, the right to selective revelation.
Perhaps a world in which data can and will be collected irrespective of user permission is a world in which the data had better be right. If more and more intelligent, robotic actors are out there doing our implicit bidding long after we've forgotten their details, then data integrity had better be as absolute as we can make it, and that is where the policy puzzles will be found. Indeed, if we are to have all-electronic health records and regular monitoring by everything from our toilet to breathalyzers in our cars, all the while the majority of medicines transition to being genomically personalized, we had better be sure that data integrity is paradigmatic. The longstanding triad of confidentiality, integrity, and availability may now be contracted to integrity and availability. Biometrics, broadly defined, drives this changeover like nothing else.
In the (defensive) security world, a majority of current new company formations involve what is called behavioral analysis. If one defines biometrics to include user behavior and not just immutable physicality, then what is to be discussed under the category "biometrics" is wider still. For the purpose of this essay, we will not do so except to say this: the driver for both the narrower and the wider definitions of biometrics is observability. As a leading-edge example, DNA analysis of a cheek swab can now be done in 90 minutes for $150 by anyone capable of operating a smartphone.
Where today various policies, well intentioned and generally effective, mandate so-called two-factor authentication, the menu of factors from which two can be chosen is expanding and will continue to expand, again as the number of uniquely identifying, measurable characteristics grows. But the majority of that growth in factors, which is to say modalities of identification, is biometric; hence compliance with two-factor mandates will trend toward pairings of two (or more) different biometric markers and away from possession of some device or retention of some memorizable fact (such as a password).
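The tension in that trend can be seen in a toy policy check. The three-category factor model (know/have/are) is the conventional one, but the class names and the function here are illustrative, not drawn from any standard or from the essay:

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something you know"     # password, PIN
    POSSESSION = "something you have"    # hardware token, phone
    INHERENCE = "something you are"      # fingerprint, iris, gait

def satisfies_two_factor(presented: list[Factor]) -> bool:
    """One common reading of a two-factor mandate (hypothetical sketch):
    at least two factors drawn from *distinct* categories."""
    return len(set(presented)) >= 2

# Under this reading, two different biometric markers are still a single
# category, so a fingerprint-plus-iris pairing would not comply...
assert not satisfies_two_factor([Factor.INHERENCE, Factor.INHERENCE])
# ...whereas a password plus a fingerprint would.
assert satisfies_two_factor([Factor.KNOWLEDGE, Factor.INHERENCE])
```

Whether regulators will read two biometric markers as one factor or two is exactly the kind of policy question the paragraph anticipates.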
This shift toward biometric factors surely has various not-yet-identified side effects, but one that we already know is that it throws search-and-seizure doctrine for a bit of a loop. Attorney Marcia Hofmann succinctly explained the findings in the 2004 case of Hiibel v. Sixth Judicial District Court as follows:
For the [Fifth Amendment] to apply, however, the government must try to compel a person to make a "testimonial" statement that would tend to incriminate him or her... [A] communication is "testimonial" only when it reveals the contents of your mind. We can't invoke the privilege against self-incrimination to prevent the government from collecting biometrics like fingerprints, DNA samples, or voice exemplars. Why? Because the courts have decided that this evidence doesn't reveal anything you know. It's not testimonial.
Ten years later, in Commonwealth of Virginia v. Baust, a court did rule in just that way: that there is no Fifth Amendment issue in compelling the production of a biometric. None of that would be of any broad notice were it not for the simultaneous expansion of biometric observability and the systems integration of those biometrics. As what is perhaps a harbinger of the future, if not a challenge to the meaning of the very word biometric, consider the insertion of chips under the skin of willing parties, so that the scope and power of observability of those parties are enhanced by their own positive action.
Big data from sensors in everything will soon (be able to) transform insurance into something personalized rather than an agreement among all policyholders to do identity-blind risk pooling. Corporate wellness programs are likely to lead the way, as they now have Obamacare solidly encouraging them to move to personalized health insurance pricing and benefits. This extends the notion of biometrics to status and outcome measures, not merely confirmation of (identity) assertions. But even when biometrics are limited to the confirmation of identity, it remains to be seen whether noncontact biometrics with increasing standoff capabilities are, technically if not operationally, a looming defeat for witness protection and victim hiding programs.
Lest one suggest that serious (long-range standoff) biometrics are in some inherent sense a search in Fourth Amendment terms, one need only review Kyllo v. United States (2001), in which the Supreme Court ruled that the evidence from standoff imaging technology constituted a search, but solely because that technology was not then in general public use. To cite Kyllo is to say that were a technology to enter general public use, the Fourth Amendment's limitations on its use would necessarily fall away. One need only then note that, with the technology to do facial recognition now coming into the price range of private citizens (where drone technology has been for some time now), the capture of biometric information and its use will soon have no limitation whatsoever.
Because of the near certainty of deep integration of biometric identification into digital systems, other forms of identification are likely to become less and less available, or obsolete, as alternative paths to identification. As such, the operational status of the biometric subsystem will become a critical necessity for the overall functioning of the broader digital system in which it is embedded. As said above, the temptation when handling criticality and scale is to centralize, in this case some or all of the biometric identification function. As any centralized system that is network connected is at risk of a distributed denial of service (DDoS) attack, the choice to centralize the biometric subsystem will create a target for DDoS attacks. While some forms of cyberoffense require the resources of a nation state, DDoS does not, as the recent discovery of a DDoS service based on a botnet of webcams demonstrated.
The author David Brin was the first to suggest that if you lose control over what data can be collected on you, the only freedom-preserving alternative is (to see to it) that everyone else does, too. If the government or a corporation or your neighbor can surveil you without asking, then the balance of power is preserved when you can surveil them without asking. Cryptographer Bruce Schneier countered that preserving the balance of power doesn't mean much if the effect of new information is nonlinear, that is to say if new information is an exponent in an equation, not one more term in a linear sum. Settling that debate requires that you have a strong opinion on what data fusion means operationally to you, to others, to society. If, indeed, and as Mr. Schneier suggested, the power of data fusion is an equation where new data items are exponents, then the entity that can amass data that is bigger by a little will win the field by a lot. That small advantages can have big outcome effects is exactly what fuels this or any other arms race.
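The nonlinearity Schneier describes can be illustrated with anonymity-set arithmetic. The population figure and the group sizes below are hypothetical round numbers chosen for the illustration, and independence of the attributes is assumed:

```python
population = 330_000_000  # roughly the US, for illustration

def anonymity_set(population: int, attribute_group_sizes: list[int]) -> float:
    """Expected number of people matching every fused attribute at once,
    assuming the attributes are statistically independent."""
    remaining = float(population)
    for groups in attribute_group_sizes:
        # Each fused attribute *divides* the candidate pool; a linear model
        # would merely subtract a fixed number of candidates.
        remaining /= groups
    return remaining

# Separately, none of these (hypothetical) attributes identifies anyone:
# a gait class shared by 1 in 50, an iris-color class shared by 1 in 10,
# a height band shared by 1 in 30...
print(anonymity_set(population, [50]))           # 6,600,000 candidates
print(anonymity_set(population, [50, 10, 30]))   # 22,000 candidates
print(anonymity_set(population, [50, 10, 30, 100, 100]))  # ~2 candidates
```

Five weakly identifying attributes, fused, single out an individual; that multiplicative collapse, rather than any one sensor's power, is what makes a small data advantage decisive.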
One might ask whether biometric data can be faked. The answer is, of course, yes it can. But to do so practically is difficult, and that difficulty may exhibit a similar nonlinearity as additional factors are added to the identity confirmation test. Or perhaps not; it has been some years since the MIT Media Lab demonstrated that thirty seconds of video of a person talking was enough raw material to synthesize a video of that person saying whatever you wanted, with both a voiceprint match and what we will call perfect lip-synching. As before, if the biometric measurements are offboard the measuring device, it is likely that impersonation attacks can be constructed at high fidelity.
There is more discussion to be had on the scope, scale, and implications of "biometrics," yet for the moment we will close with the logical truth that no people, no society, needs rules against behaviors that are impossible; the ballistic trajectory of biometric capabilities, however, is such that constructing prohibitory rules before something becomes possible is now wholly essential. Probabilistically, enumerating forbidden things must fail to anticipate some dangers; hence the policy tradeoff is whether to nevertheless attempt that enumeration or to switch over to enumerating permitted things. A free society being one where "that which is not forbidden is permitted" and an unfree society being one where "that which is not permitted is forbidden," whether we can retain a free society by enumerating the forbidden aspects of biometrics is now at question.
Dan Geer is the chief information security officer for In-Q-Tel, a not-for-profit investment firm that works to invest in technology that supports the missions of the Central Intelligence Agency and the broader US intelligence community. His history within the security industry is extensive. Geer was a key contributor to the development of the X Window System as well as the Kerberos authentication protocol while a member of the Athena Project at MIT in the 1980s. He created the first information security consulting firm on Wall Street in 1992, followed by organizing one of the first academic conferences on electronic commerce in 1995. Geer is also the past president of the USENIX Association where he earned a Lifetime Achievement Award.