A strange experience for Internet users is the ads that pop up seemingly tailored to them. The ads are selected on the basis of data about prior Web activity collected by digital giants like Google. Yet they are sometimes just plain wrong about what a user wants. And there’s a lesson in that for a new trend in the court system, which also relies on big data to mete out justice.
Courts in more than 20 US states, as well as in many other countries, have begun to use analytic tools to estimate the probability that a convicted criminal will be dangerous in the future. Congress is also weighing bills that would add computer-driven “risk assessment” to federal court sentencing, beyond the normal assessments by judges and clinical experts.
These actuarial tools promise to reduce the prison population – more than 2 million people in the US – by ensuring that “high-risk” criminals receive lengthy sentences while “low-risk” ones are given either short sentences or rehabilitation and treatment.
While these goals are worthy, the tools themselves raise troubling issues. For sure, data about a person’s past criminal behavior is useful in setting a sentence. But the new tools also try to calculate a person’s risk of recidivism based on a host of other factors outside his or her control, such as family upbringing, gender, neighborhood, or even race or ethnicity.
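To make the objection concrete, here is a purely hypothetical sketch of how such a tool might work. The weights, feature names, and formula are invented for illustration and are not drawn from any real risk-assessment instrument; the point is only that when group-level factors enter the score, two people with identical criminal histories can be rated differently.

```python
import math

# Hypothetical weights for illustration only -- not taken from any
# actual risk-assessment tool. Positive weights raise the predicted risk.
WEIGHTS = {
    "prior_convictions": 0.40,        # individual criminal history
    "age_under_25": 0.30,             # demographic factor
    "high_crime_neighborhood": 0.50,  # group factor outside one's control
}
BIAS = -2.0

def recidivism_risk(features):
    """Logistic score: a probability-like value between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two offenders with identical criminal histories and ages...
a = {"prior_convictions": 2, "age_under_25": 1, "high_crime_neighborhood": 0}
b = {"prior_convictions": 2, "age_under_25": 1, "high_crime_neighborhood": 1}

# ...receive different scores solely because of where they live.
print(f"offender a: {recidivism_risk(a):.2f}")
print(f"offender b: {recidivism_risk(b):.2f}")
```

Running the sketch shows offender b scored as higher risk than offender a even though the only difference between them is neighborhood – exactly the kind of group-based disparity the editorial describes.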
In other words, an individual is no longer an individual but is also viewed as a subset of a group.
The result of such data crunching, as US Attorney General Eric Holder warned last year, can be disparities in sentencing, such as putting more blacks or Hispanics behind bars for longer periods. The accuracy of such data-driven risk assessment also remains in doubt. And the tools violate the constitutional idea that justice is applied to an individual for his or her own criminal conduct, not to a class of people.
As the Supreme Court has said, guilt is personal while guilt by association is unjust.
But the worst aspect of sentencing a convict on the basis of computer-generated analysis of social traits is that it denies the idea of an individual capable of making the right choices. A 1952 Supreme Court decision described that capability this way: “It is as universal and persistent in mature systems of law as belief in freedom of the human will and a consequent ability and duty of the normal individual to choose between good and evil.”
Judges have long used the length of a sentence as an incentive for an offender to learn to live within the law. But as law professor Dawinder Sidhu of the University of New Mexico wrote in a 2014 law review article, “The incentive is meaningless if it is applied to a factor that the individual has no actual or meaningful ability to change.”
Individuals must retain the ability to reject predictions about their future and embrace moral and lawful behavior.
“The individual may adopt a sense of self that is materially different than that which he or she had at the commission of the crime or at sentencing,” wrote Mr. Sidhu, who has also worked with the US Sentencing Commission. “We should want individuals to beat the odds. It should be our preference for individuals to be productive and our goal to help facilitate the development of the individual such that he or she can be part of mainstream society upon release.”
These tools claim to be “evidence based.” But both in their potential to produce unequal justice and in their violation of the principle of individual moral agency in the justice system, they should remain suspect. Just as a Google pop-up ad can miss its algorithmic mark, so can big data about an offender’s ability to choose a better life.
[Editor's note: An earlier version of this editorial gave an incorrect number for those incarcerated in the US.]