AI in the courtroom: Judges enlist ChatGPT help, critics cite risks

An Indian High Court judge used AI chatbot ChatGPT to summarize case law. The use of AI chatbots in the legal system is growing, with proponents praising their potential to streamline processes while critics warn of biases and false results.

Richard Drew/AP
The ChatGPT app is displayed on an iPhone in New York on May 18, 2023. Some judges are experimenting with AI chatbots to assist them in rulings, but several legal and tech experts say their reliability is questionable.

Indian High Court judge Anoop Chitkara has presided over thousands of cases. But when he refused bail to a man accused of assault and murder, he turned to ChatGPT to help justify his reasoning.

He is among a growing number of justices using artificial intelligence (AI) chatbots to assist them in rulings, with supporters saying the tech can streamline court processes while critics warn it risks bias and injustice.

“AI cannot replace a judge. ... However, it has immense potential as an aid in judicial processes,” said Mr. Chitkara.

“The knowledge revolution has started, and these AI platforms have in certain situations demonstrated their capabilities to instantaneously transform queries into outstanding results.”

Chatbots like ChatGPT and Google’s Bard are software applications designed to mimic human conversation in response to users’ questions.

Mr. Chitkara said he did not rely on ChatGPT to help decide his ruling in the 2023 case at the Punjab and Haryana High Court.

However, he wondered if he was relying too heavily on his own “consistent view” that allegations involving an unusually high level of cruelty should count against granting bail, and asked ChatGPT to summarize case law on the issue.

India’s justice ministry did not immediately respond to a request for comment.

The use of AI in the criminal justice system is growing quickly worldwide, from the popular DoNotPay chatbot lawyer mobile app to robot judges in Estonia adjudicating small claims and AI judges in Chinese courts.

In Cartagena, a city on Colombia’s Caribbean coast, Judge Juan Manuel Padilla also turned to ChatGPT for help in a lawsuit in which the parents of an autistic boy were suing his health care provider for treatment costs and expenses.

“[ChatGPT] is generating text that is very reliable, very concrete, and applicable to a case in a specific way,” said Mr. Padilla.

He asked the chatbot several legal questions, such as whether an autistic child is exempt from fees for therapy. He included the exchange in his ruling, which sided with the child.

Concerns over false results

But chatbots’ reliability is questionable, said several legal and tech experts.

“Some judges are trying to find a way to make the job faster – but they don’t always know the limits or risks,” said Juan David Gutierrez, professor of public policy and data at Universidad del Rosario in Bogota, Colombia.

“ChatGPT can make up laws and rulings that don’t exist. In my view, it shouldn’t be used for anything important.”

There have been numerous examples of chatbots getting information wrong or making up plausible but incorrect answers – which have been dubbed “hallucinations” – such as inventing fictional articles and academic papers.

When Linklaters, a global law firm headquartered in London, tested ChatGPT on 50 legal questions, its legal experts found the chatbot proficient in some areas but severely lacking in others.

The AI confused sections of the Data Protection Act 2018 and failed to give complete answers on English contract law.

“If you didn’t already have a very good understanding of that area of law, it would be very hard for you to work that out,” solicitor Peter Church, an expert in data privacy at Linklaters, told the Thomson Reuters Foundation.

Use of chatbot “a disaster”

Better technology promises a way to alleviate the huge backlog that is clogging some legal systems.

India alone had more than 40 million cases pending in its lower courts last year, while Brazil saw 26 million new lawsuits filed in 2020 – more than 6,000 per judge.

But AI risks oversimplifying complex problems and could raise unrealistic expectations of tech’s capabilities, Dona Mathew and Urvashi Aneja from the research collective Digital Futures Lab wrote in a recent report.

There are also concerns over privacy violations and the exploitation of judicial data for profit.

“With biased and incomplete datasets, no legal remedies and accountability safeguards ... these changes can lead to systematic harms like threats to judicial independence and stagnation of legal principles,” they wrote.

Raquel Guerrero, a lawyer for three journalists in Bolivia who were accused of posting photos of a victim of violence without her permission, expressed concerns when the court consulted ChatGPT during an online hearing in April.

Ms. Guerrero said the complainant gave permission for the photos to be shared online but later denied she had done so.

Constitutional judges asked ChatGPT about any “legitimate public interest” for journalists posting online photos of a “woman showing parts of her body” without her consent.

ChatGPT answered that posting them was a “violation of the person’s privacy and dignity.” The judges ordered the photos to be removed from social media.

The court record said ChatGPT does not replace decisions made by jurists, but that it can be used as additional support to “clarify certain concepts.”

But Ms. Guerrero said the chatbot’s use in the hearing was “arbitrary” and a “disaster.”

“It can’t be used as if it’s a calculator that takes away the obligation of judges to use reason and to apply justice and to apply it correctly,” Ms. Guerrero said, adding she is considering filing a complaint against the judges for using the chatbot.

“Obviously, ChatGPT doesn’t stop being a robot. If you ask it in the right way, it will answer what you want to hear.”

This story was reported by the Thomson Reuters Foundation. 
