Judge Samuel Thumma on Generative AI, Closing the Justice Gap, and Keeping Humans in the Loop
by Justin Smith
The legal profession stands at a profound inflection point. Generative AI has moved beyond theoretical curiosity to challenge long-held standards of attorney competence and the very mechanics of ediscovery.
As a member of the Arizona Supreme Court’s AI Steering Committee and a leader within the ABA Task Force on Law and Artificial Intelligence, Judge Samuel Thumma is at the center of this transition. He views this moment not merely as a technical shift, but as a once-in-a-generation opportunity to modernize a profession that is historically slow to change.
Judge Thumma’s perspective is informed by a distinguished career that bridges the gap between traditional legal pillars and the digital future. After working as an attorney at firms like Arnold & Porter and Perkins Coie, he has served as a judge on the Arizona Court of Appeals, Division One, in Phoenix since 2012, with terms as Chief Judge and Vice Chief Judge during that time. Throughout his tenure, he has become one of the judiciary’s most prominent voices on the intersection of law and technology, consistently advocating for a human-in-the-loop approach to innovation that preserves the integrity of the social contract.
Judge Thumma sat down with Everlaw to discuss the evolving landscape of modern legal practice. Our conversation explores the shifting responsibilities of practitioners in an automated world, how the legal community can leverage new tools to address the persistent challenges of the justice gap, and more.
You originally began your academic journey in veterinary medicine and agricultural journalism before moving into the law. Looking back, how did that background shape your approach to the bench, particularly when it comes to translating complex technical issues for the public?
I grew up on a farm. We grew corn and soybeans, and had a small beef cattle herd. This was in northwest Iowa, outside a little tiny town called Laurens, Iowa, maybe 1,500 people. It does have the distinction of having a Disney movie made about it called The Straight Story, which was nominated for an Academy Award. But it was a very small Midwestern town, with 64 people in my high school class.
Even in a town of our size, we always had a veterinarian, and I found myself drawn to the practice of veterinary medicine. So I went on to Iowa State University, which was sort of a logical place to study to become a veterinarian.
I ran head on into organic chemistry and zoology in my sophomore year of college, and we didn't get along very well, which led to me changing my major, and really caused me to struggle for much of the year.
I was fortunate enough to be awarded a Truman Scholarship, which at the time gave me funding for two more years of undergraduate and then two years of graduate work. Harry Truman was the last president not to go to college, and Congress set up a fund in his name that still remains.
I was also very interested in agriculture. Iowa State offers a Bachelor of Science degree in agricultural journalism, which means you take a quarter of your credits in the Journalism school, a quarter of your credits in agriculture, and then a broader set of classes outside of that.
Once I had my degree, I needed to figure out what I wanted to do. I took some policy-level agricultural economics classes at Iowa State, and thought about going to graduate school for agricultural economics, but it didn't really offer the opportunities that I wanted to have at that time. So I ended up taking a year off through a U.S. Department of Agriculture exchange program in Australia called the IFYE program, which was more of a cultural exchange than an academic one, before going to the University of Iowa's College of Law.
Without that Truman Scholarship, I would not have become a lawyer, and could not have afforded law school.
My first goal was to not flunk out. The likelihood was that after graduating, I would go back home. We had a couple of lawyers in my town who were pillars of the community and terrific lawyers, one of whom our family used until he retired, and he was just a great, great lawyer.
But then I did okay in law school and had some opportunities to do other things, and ended up clerking for a couple judicial officers. I worked in private practice in Washington, D.C. at a firm called Arnold & Porter, and was dating my now wife of nearly 34 years, who was also a law school classmate, who had summered in Atlanta and Phoenix and liked Phoenix better of the two opportunities. So, that's why I've been in Phoenix for an awfully long time and loving it, but if you told me that my second year in law school, I would have laughed at you. That was not on my radar, but it's worked out well.
You’ve advocated for judges being active managers of the discovery process, not just passive observers. What is something you’re seeing from the bench that judges can do to take a more active role in discovery?
First, I want this to be a critique of me and the things that I can do better. I'm on the appellate court now and have been for almost 15 years, and was a trial judge before that for five years on two rotations that have nothing to do with civil litigation, or largely don't. One of those was juvenile, which is a major rotation here in the Maricopa County Superior Court, for three years, and then criminal for two years, where you're a genuine trial machine. I presided over maybe 35 felony jury trials during that two year period, ranging from aggravated assault to first-degree murder.
At that time at least, the disclosure and discovery in that realm was largely dealing with things like police reports, bales of marijuana, bricks of cocaine, guns, and things like that. So not the kind of high management things that you're talking about here.
In the civil realm, however, where I did my practice, the court philosophy in the last generation or two has really changed from, "Hey, we're the courts. We're here. We're open. You guys take all the time that you need to prepare your case, and then we'll try it when you're ready," to a more active management aspect where deadlines have meanings.
I also think there's a huge access to justice component to that because, especially in civil litigation, litigants are likely paying somebody by the hour to represent them. And from the court perspective as well as the advocate's perspective, every additional day that a case remains pending, there's an added cost to it.
That can be hard to quantify for the court. With a criminal trial, for example, you really appreciate that. Every additional day that a trial is delayed and someone who has not yet been found guilty is being held in custody, that's a big, big cost.
My view is always that, from the court perspective, when disclosure and discovery works the best, it’s comparatively invisible to the court until you get to trial. And that doesn't mean deadlines shouldn't have meaning, or that you shouldn't have court management of those deadlines. You need to be a careful listener when there are unique needs for a case, when deadlines need to be extended, and when accommodations need to be made, but not every case fits that category.
There are general standards that we hope to meet, which I consistently see on our appellate court. We try to hit our marks with respect to resolving appeals, recognizing the standard that justice delayed is justice denied.
We need to keep that in mind as we do our best to move cases along to get prompt and hopefully fair resolutions for the parties that appear in front of us.
Ediscovery has long been one of the most expensive parts of litigation. With generative AI lowering the cost of reviewing large datasets, do you think this will lead to more cases reaching trial, or will it just create a new arms race in pre-trial motions and an attempt to indefinitely delay trial?
Let me carve out the attempt to delay part of this, because I think it has the potential to result in a lot more pretrial motion practice.
My area of expertise was civil defense work. In an era where people wanted to be trial lawyers, I was never a trial lawyer. I was a litigator. My clients had not asked to be a party to the litigation. It was joined, and if I could allow my clients to just go home, life was better. So, we did a bunch of pretrial motion work there and had some successes.
My thought was if we go to trial, one of two things is going to happen, neither of which will be good for my clients. They’re either going to have to pay me a lot of money, or pay me and the other side a lot of money. And then depending on the ultimate result at trial, there’s a whole bunch of appeal work after that.
"I'm optimistic about generative AI generally in the sense that it can help enhance access to justice and help the courts figure out how we can do things better."
So, I think generative AI absolutely has the potential to help on the trial side, but also with summarizing and collating and making sense out of disparate documents that span large amounts of time. When I was last practicing, a logical chronology was about as good as you could get, and now we're so much further beyond that. I do think there's an opportunity for more motion practice there. I think we are anecdotally seeing more motion work by self-represented litigants. And my hope is that's positive. Undoubtedly, at times it is and at times it is not, and that's where we need to fish through things.
As far as the trial rate, that's a really good question. I don't have a prediction or crystal ball on that. In my mind, if you can resolve matters early, even before they hit the courtroom and the parties are okay with that, that's better. But I just don't know, and time will tell.
You’ve written extensively about the potential for generative AI to help increase access to justice. What is something courts can do right now to leverage generative AI in this way, and how can they balance the potential for it to be misused?
Let me start with a couple of examples on how generative AI can help, particularly with self-represented parties.
One, I've been very critical of what I call dumb chatbots, which are the chatbots that were around five years ago that I always found frustrating. Courts were using them, and while they were using them for good reasons, I don't think they were very effective because for any question of any complexity, it was just a loop. You didn't really spin out of it.
I'm told by people that I trust and think highly of that these days, generative AI-driven chatbots are actually doing some good things. Nevada has been using a court-based chatbot, for example.
I'm on our AI steering committee for our state's Supreme Court, and I chair the access to justice work group for that. At our next meeting, we'll have some examples of chatbots that folks think are really helping self-represented litigants.
All the states that I know of have a process where somebody can be committed for mental health treatment when they're a danger to themselves or others, or have chronic mental health challenges. In Arizona, that commitment order can be for up to 365 days, half of which can be in custody. And while that's a civil proceeding, when you restrict someone's custody and liberty, that's a big deal.
In Indiana, those orders can only last 90 days. The Court of Appeals hears most of them. Their Supreme Court put some leadership in place to say, "How can we get those appeals to the appellate court so they can be resolved faster?"
In other words, if you have an appeal for a 90-day commitment order that's resolved six months after it was issued, and the court ends up ruling you never should have been committed in the first place, that's a pretty hollow victory. So there's a pilot program in Marion County, Indiana, where Indianapolis is located, where they have generative AI create a transcript of the hearing within hours of it concluding, so that those appeals can get processed quickly.
Judge Leanna Weissmann, who's a judge on the Indiana Court of Appeals, tells me that these days, with that pilot, they're getting commitment orders through final decision on appeal in 10 to 14 days, which is absolutely lightning fast.
And while I don't know what the results are on those appeals, I'm guessing if it's like in Arizona, a lot of those are affirmed. It’s great to get an answer so early in this 90-day commitment window, and I'm excited about that. And again, it's someone who is represented by counsel, by definition, but they're also kind of a ward of the state in that circumstance.
I'm very optimistic about both of those. And I'm optimistic about generative AI generally in the sense that it can help enhance access to justice and help the courts figure out how we can do things better, but I'm quick to concede that there's mischief that can happen.
I’m concerned about a scenario that many legal scholars have also raised. Imagine someone using generative AI to churn out housing complaints for tenants without a proper legal or factual basis. For a very small fee, they could file these en masse and hope for default judgments. This type of automated litigation would seriously twist the justice system.
Now, I don't know of that happening, but there absolutely are downside risks to generative AI and access to justice. I'm hopeful that the world and court systems can put things in place that will help self-represented litigants, but there are forces that could use this technology to harm them as well, and I worry about that.
You wrote an op-ed for The Hill in 2023 titled “We Have a Once-in-a-Lifetime Opportunity to Improve Access to Justice — Let’s Not Squander It” about how a combination of new technology and standards had the potential to improve access to justice, and how Arizona’s courts were helping lead the charge. Nearly three years removed from that piece, do you think improvements have been made? Where do you see room for improvement?
Let me put this in context. When COVID hit, a bunch of court systems shut down, which resulted in backlogs that are still haunting courts. Nobody likes backlog if they're thinking about court management and administration. And every judge that I talk to says they're busy, and we're all trying to work hard to do what we can to resolve the cases in front of us.
But Arizona's courts, thanks to the leadership of our then Chief Justice Bob Brutinel and now Chief Justice Ann Scott Timmer, were pretty open to innovation during COVID, and that was part of what I wrote about in that piece. It accelerated things by a decade or maybe more that we should have been thinking about long and hard, but, of course, we're slow to change. We're based on precedent, and we don't always want to lean forward. But we did a ton of work on remote appearances, and using technology instead of requiring somebody to show up in court.
"It used to be that if you knew the facts and the documents, you were indispensable. That's long since passed. ESI then became the thing, then TAR when you knew how to use it. And now I think it's generative AI."
To point out a couple of examples, Colorado City is in Mohave County, a city of about 5,000 people in the far northwest corner of Arizona, north of the Grand Canyon. To get to the nearest of the three Superior Court general jurisdiction courthouses in Mohave County, residents need to drive four-and-a-half hours one way through two other states. Maybe since time began, we should have been doing remote appearances there, but we did during COVID, and it was terrific.
In eviction cases in Maricopa County, where Phoenix is located, there are 26 or 27 justice courts where eviction actions happen. There is at least one of those that has no public transportation to it.
And unlike the counties where I grew up in Iowa, which are 36 miles by 36 miles, Maricopa County is huge, 100 or more miles north-south. So failure to appear rates, when personal appearances were required in those hearings, approached 40%. Two in five tenants just didn't show, and I'm pretty sure they lost.
When we moved to allow phone and remote appearances during COVID in March of 2020, failure to appear rates went down almost immediately to about 25%, just by that one change.
A group that I co-chaired called the Plan B Workgroup ultimately made recommendations on remote and in-person hearings in Arizona for all trial court hearing types in both general and limited jurisdiction courts. The Supreme Court adopted those, then implemented them through administrative orders throughout the state. And I think they've worked fairly well, but I wanted us to find out more about whether they really were working. So we did surveys of the judicial branch and members of the State Bar of Arizona in 2021. Then we repeated that survey almost verbatim in 2023, and we just finished another round, which is what I reported out in 2025.
I have three major takeaways from those results. One is that there's a hunger for more training and assistance in how to use technology, both from the bench and the bar. And we need to listen to that. We need to provide that kind of training.
Two, there's a hunger for better equipment. While people are generally good with the advantages of remote appearances, there's a definite need for better technology to assist with those hearings.
The third thing, which I thought was interesting, came in response to one of the questions we asked: "What improvements should we make going forward?" One of the options was essentially never using technology to do anything. I'm exaggerating a little bit, but in effect: let's go back to how things were in January of 2020 before all this stuff happened.
In the first survey in 2021, something like 7.2% of the respondents chose that option. In 2023 that rose to 7.8%. And in 2025, I think it went back to 7.5%. The takeaway for me is that I will take a consistent, stable, high-single-digit percentage of folks who want to go back to how things were unchanged, and to have everybody go to the courthouse and everybody find parking and everybody find transportation, because that suggests to me that 92% of the people are good with the innovation that we have. Now, there are a whole bunch of permutations for that. It's far from perfect, and we do need to do more training and education, and technology is expensive. But it shows that people can change, including old judges like me.
You’ve also mentioned the “bright line test” when talking about AI, that if you wouldn't let a clerk do it, don't let AI do it. As generative AI becomes more sophisticated and prevalent in the legal profession, do you see that line or standard moving, or do you think it should be a permanent boundary for the bench?
I worry that it is moving. And just to share, I will step down from the bench at the end of August of this year. I'll do some other things with the courts after that, but my time horizon is pretty short going forward. I'm not worried that generative AI is going to replace me in no small part because of that. I think even if I had 10 years in front of me and generative AI could replace some things I'm doing, I'd be okay with that.
My clerks aren’t using generative AI unless it's behind a password-protected barrier. And I've done the same thing, not for case-based work, but for other articles that I've written. And at times, I've found a needle in a haystack, and at times, it's a big swing and a miss.
And this did not originate with me, but there needs to be a human in the loop. And more specifically, where I get to my worry, it needs to be the right human in the loop.
In our court, we've had hallucinated cases. You go to look for them, you can't find them, and they stand for propositions that are foreign to me. I've been a member of the State Bar since 1992, and I've stumbled over a lot of things, and so I know when something doesn't sound quite right that it's a tell to dig deeper. We've seen hallucinated cases that stand for propositions that absolutely exist, and that's less bad than a made-up proposition, but it's still not good because you want to check and make sure that the right authority is supporting the right proposition.
"This is a great time to be a young lawyer if you lean forward. Judges also need to understand the technology, and not just put their head in the sand and say they’re never going to use it."
But what I struggle with, and it's both an opportunity and a danger for new law grads, is that if they know technology, they can become indispensable. I teach evidence as an adjunct at Arizona State University, and I always encourage students to learn as much as they can about technology.
It used to be that if you knew the facts and the documents, you were indispensable. That's long since passed. ESI then became the thing, then TAR when you knew how to use it. And now I think it's generative AI. But when you're right out of law school, it is fair that you don't have that depth of knowledge. You haven't stumbled over those things. Your ability to predict what the law should be, which is really what you do with research, is more limited.
So in that sense, just turning somebody loose and saying, "Hey, go use generative AI and prosper," that creates a big risk. And it certainly does for self-represented litigants who don't have that background in training, or don't have the mentorship, and often are working in times of personal crisis and are strangers to the law. The law is complicated and complex and uses concepts and terms that are not understood by most people, so how we wrestle through that is going to be a challenge for sure.
You recently co-authored a paper in Volume 26 of The Sedona Conference Journal titled “Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers.” Why did you and your co-authors feel the need to create these guidelines, and what are you hoping they’ll achieve?
I'll speak for myself as one voice in what was, excluding me, an all-star choir. From my perspective, I wanted to be a part of this because of the informal feedback I've gotten when I've spoken to judicial groups about generative AI.
I'll start by asking two questions. One is, "How many people have even tried a generative AI platform?" And at most maybe about half of hands go in the air.
What I'll do is I'll say, "I'm going to promise you three things, and then we're going to do a live demo. One is that when we do this live demo, the FBI isn't going to come storming in, asking, ‘What are you guys doing? Are you using generative AI?’ Two is the security people aren't going to open the door and wonder what's going on. And third is the sun's going to rise in the east tomorrow.”
Then we’ll do a generative AI test, starting with something small like, “What's the best risotto recipe?”
I'm not looking for folks to share confidential or private information. And as soon as they get started, you can feel the blood pressure in the room decline. They understand that this is a powerful technology that has opportunities and risks, but it's also not mystical in the sense that you can't put your fingers to the keyboard and use it.
Some of that article was to get people to say, "This can be done. You can do this. You need to do it thoughtfully, but you can do it." And the second question that I've asked more recently is, "How many people have a use policy in their courts for how you can use generative AI and/or have had training before here on how to use it?" And never have I had more than one in ten hands go up in the air. We have a use policy in Arizona, as do a lot of other courts, but how often that's being followed is a different question.
That paper was to provide something that was in the mid-range. Not at 500,000 feet, because there's a lot of platitudes about generative AI that don't really help, and not at ground level because you can't do that for all the courts around the country. We wanted to get something that tried to normalize the thought that generative AI really can be used for the forces of good in judicial chambers, and to offer some things that judges might want to think about.
To close us out, do you have any advice or tips for this next generation of judges and attorneys who are going to be dealing with generative AI, potentially in their daily practice from here on out?
The number one thing I tell people is get to know it. Get to know what generative AI is and what it isn't. Understand some of the terminology. If you don't know what a large language model is, you're going to struggle with what generative AI is and can be. Understand the technology a little bit, and understand what platforms are out there.
As a court system, we work almost exclusively with text. There are also platforms that create voice messages, videos, pictures, and things like that. People should get to know what's out there. Get to know what the upside and the downside is of these things. Understand the ethical environment that we have both as lawyers and judges that you need to comply with. We can't go perambulating. We can't go walking around the scene to figure out what's going on. That's not what judges could or should be doing.
The National Center for State Courts has a sandbox where you can play around with different things, and then make a decision that's right for you. I've told people, "Look, if you learn this stuff, if you tried it, and you say, ‘Hey, that's not for me,’” I respect that.
We talked a little bit about being indispensable as a young lawyer. This is a great time to be a young lawyer if you lean forward. Judges also need to understand the technology, and not just put their head in the sand and say they’re never going to use it. I want to see what we can do to better manage the courts.
Anecdotally, I got some data almost three years ago now from the justice courts in Tucson about appearance rates. My friend, Judge Ron Newman, sent it to me by date. Well, if you have it by date, you can go by day of the week. And it turns out your appearance rates on Thursday are about 2%, 2.5%, maybe 3% higher than they are on a Monday. Maybe we as a court should listen to that and say, "Why don't we set more of these on Thursdays?"
We can do things with data, using generative AI, and we can tease out things that will improve the court system. For example, can we reduce or maybe eliminate bias in the data that we collect? We collect a ton of data. It's imperfect, no doubt, because it's the product of humans. But over time, you can find some trends. Are we looking at the right thing, for example, on treatment courts? Is recidivism the right thing to look at for a drug treatment court? I don't know. But we can look at things through a completely new lens to try to better understand what courts are doing. And I absolutely think that's true for practitioners as well.
You've got to recognize these boundaries, no doubt. And the practical limit of how to use generative AI is bound only by the imagination of creative folks. But for the same reason that the COVID pandemic required us to look at remote appearances, this new technology, if you can look at it as judging 2.0, has the ability to help us do things differently.
Justin Smith is a Senior Content Marketing Manager at Everlaw. He focuses on the ways AI is transforming the practice of law, the future of ediscovery, and how legal teams are adapting to a rapidly changing industry.