
Two Legal Experts Deliver a Masterclass Panel on AI in the Justice System

Judge Paul Grimm and Dr. Maura Grossman Discuss GenAI and the Law

by Justin Smith

“I think people are living in one of two or three universes at the present time,” Dr. Maura Grossman told the crowd gathered in the Grand Ballroom of The Palace Hotel during the final day of Everlaw Summit ’23. “One universe is utopian, and the proliferation of generative AI is all good. This is where my computer science students live, in this world... Then universe two is the existential risk world where, oh, dear, the end is coming, we're all gonna become paper clips in a matter of time. And then there's universe three. I think I'm living in universe three.”

This metaphorical universe three, as Dr. Grossman explained, is one where the benefits of AI are as obvious as its risks, and where the focus is on balancing the two to create tools that can benefit everyone.

This idea kicked off the panel “Adjudicating at the Edge of Law and Technology,” where Dr. Grossman, a research professor at the University of Waterloo and an expert on the intersection of AI and the law, was joined in her “universe three” by former U.S. District Court Judge and current Duke Law School professor Paul Grimm, as well as Everlaw’s own Strategic Discovery Advisor Chuck Kellner, to discuss AI’s impact on the American legal system.

Dr. Maura Grossman (center) discusses potential impacts of generative AI on the courts with Everlaw's Chuck Kellner (left) and Judge Grimm (right, via video).

Beginning at an Inflection Point

We’re now firmly past the stage where AI systems were considered niche products that only a few could understand. AI-powered facial recognition technology is already being used by law enforcement, and on any given day the headlines of any major news outlet are likely to include at least one article about AI.

One of the first points raised by Judge Grimm concerned a New York Times article headlined “‘A.I. Obama’ and Fake Newscasters: How A.I. Audio Is Swarming TikTok,” in which the Times did a deep dive into how AI-generated misinformation is spreading across TikTok and other social media platforms.

Both Judge Grimm and Dr. Grossman questioned what happens when you look to certain sources that turn out not to be legitimate. How can you know what to trust?

What does a world in which everything must be independently verified look like?

And even if you are able to debunk information that has spread online, what if the message has already been shared so widely that any effort to disprove it is lost? That’s something that can influence nearly every part of our lives, from news to elections to the justice system itself.


Dr. Grossman underscored this with a study in which a group of subjects played a card game that concluded without incident.

After the game, the participants were shown an AI-generated video, also known as a deepfake, in which one of the participants appeared to cheat by taking cards out of his pocket and placing them on the table.

Upon seeing the video, over half of the participants said they would sign a sworn affidavit that the participant in question had in fact cheated, despite there being no other evidence or facts against him.

“The concern is not that there can be fake evidence,” Judge Grimm said. “There’s always been fake evidence. The concern is that it’s getting so good that it will be difficult to tell the truth from non-truth.”

AI can produce audio and visual narratives with a speed and realism never seen before, which can have real-world implications for court decisions and the very idea of justice.

“Right now, most of our judges and our juries evaluate evidence to make decisions in civil and criminal trials,” Dr. Grossman said, “and what happens when you can no longer use your eyes and ears to do that?”

You Can’t Unring the Bell

Establishing clear standards for how AI technology can be used and presented in courtrooms is a pressing need, and it was top of mind for the panelists.

Judge Grimm highlighted function creep, which occurs when information is used for a purpose other than the one originally specified. “Where instead of trying to predict who might commit a new crime while waiting for trial to determine what beneficial supervisory conditions they get,” he said, “[AI] is now being used to predict the likelihood of recidivism to determine how long of a custodial sentence [the accused] should be sentenced to. The idea being that the greater the chance of recidivism, the more you have to deter that by a longer sentence.”

Judge Grimm also raised the point that the developers of these AI systems are private actors who have determined on their own how to design, code, and train the technology.

That makes them the arbiters of what the system knows, including any biases and beliefs that may not be seen as equitable or fair in the eyes of the court.

If a developer has ingrained some form of bias into the system, how can you tell whether the outputs it gives you are tainted?

Biases in AI

Dr. Grossman laid out a clear set of problems concerning bias in AI systems through several examples. One of her first concerns was that if she were to prompt an AI system for a picture of a lawyer or an architect, it would come back with images of white men, while if she asked for a picture of a felon, it would return images of Black men. She later circled back to real-life instances in which people of color were wrongly arrested because of mistakes made by AI-powered facial recognition technology.

She pointed to an NPR article in which a researcher prompted an AI program with phrases like “Black African doctors providing care for white suffering children,” only for the program to render the children as Black and, in some instances, depict the doctors as white. It is just one of numerous reported instances in which AI systems have proved biased.

Many organizations working with GenAI have committed to clear principles meant to bring transparency to their tools. But adoption is far from universal.

Although AI offers the opportunity for more equity and access, it’s essential not to let it lapse into the insidious hierarchies that have historically plagued new technologies. Bias permeates societies around the world, and while AI has the potential to correct many wrongs, we need to ensure it doesn’t create new problems or perpetuate old ones.

Practical Next Steps

As for how to manage these systems and their impact on the legal process going forward, Judge Grimm and Dr. Grossman highlighted the need to create and adapt rules and frameworks that are actionable and realistic.

There has to be consistency across districts, with rulemakers codifying uniform guidelines that are followed and understood by every attorney and judge in the courtroom.

There needs to be a renewed commitment to the duty of technological competence for all parties, with training in place so those parties actually know what they’re doing with the technology.

The implementation of advanced technology in ediscovery has been a long, slow process, with some attorneys and judges still refusing to work with anything other than pen and paper. The same can’t be allowed to happen with AI systems.

Juries should be empowered to tell what’s real and what’s not. Judges may have to evaluate disputed evidence and decide whether it would be unfairly prejudicial to let the jury hear it in the first place, especially if it’s considered vital to the case.

And of course, there’s the potential for increased costs from even more extensive ediscovery, which also raises concerns about access and the affordability of litigation. Judges might have to take more control as administrators, setting disclosure deadlines and ensuring fair opportunities for discovery.

Firms that adopted AI early have already begun using it in their everyday practices, implementing systems that spare younger attorneys grueling hours reviewing thousands of documents, which has led to better retention rates and a more enjoyable work environment. Similar immediate impacts can be found in the courts, with faster timelines for lawsuits and automated administrative tasks, among other benefits.

Regardless of whether you view the proliferation of AI systems as an exciting new frontier or an unregulated disaster, the evidence shows that GenAI is not only here to stay but can serve as a helpful tool when used properly. The ediscovery process can become more streamlined than ever, and hours spent on menial tasks can be reclaimed. That immense potential, coupled with the right standards and practices, can help bring lasting change to a legal system that desperately needs it.