Professor Nancy Rapoport on the Ethics of Generative AI, Innovation, and the Future of Legal Practice
by Justin Smith
Professor Nancy Rapoport has long been ahead of the curve.
At a moment when legal professionals are grappling with how generative AI will reshape everything from billing structures to client expectations, Professor Rapoport is one of the few academic voices treating this shift with both urgency and nuance.
As the Garman Turner Gordon Professor at UNLV’s William S. Boyd School of Law, she’s exploring the practical and ethical consequences of emerging technologies for legal education, law firm economics, and professional responsibility.
In this conversation, Professor Rapoport offers a candid look at the legal profession’s relationship with change, from the resistance she sees in senior leadership to the opportunities she’s helping students unlock with generative AI tools in the classroom. Her insights demonstrate that generative AI isn’t just a technological shift, but a generational and philosophical one that law schools, firms, and courts alike will need to adapt to.

You’ve been a law professor for much of your career. I was wondering if you could speak about what drew you to academia, and talk a little bit about the work you’re currently doing at UNLV?
Let’s start with why I decided to leave the law practice.
I left big law because I realized the thing I liked most was thinking about unusual problems. The joy of working in big law (and there is some) is that you get really hard questions to answer, which I loved thinking about.
I also loved teaching junior associates, and I thought, “Where do I get to do that where I can also pick what I want to think about?” And the answer was academia.
I started at The Ohio State University, and had a wonderful career there. I moved up the ladder very quickly, and became an associate dean, and then I got invited to be the dean at the University of Nebraska, and the dean at the University of Houston, and then UNLV made me an offer I couldn't refuse.
They gave me the perfect teaching schedule and a great professorship, so my husband and I moved out here in '07. You would think that being a happy law professor would be the perfect life, and let me tell you, it is. My husband calls it "the loophole in life" for a reason.
But I also missed the administrative side of things. I was in two different president's cabinets, and at one point I was the interim provost. For seven weeks, I was even the university's Chief Financial Officer. But eventually I came back to being a happy law professor again.
"The beauty of it is, computers don't get bored. They don't drift off. As long as you're managing how the searches work, I think they're more efficient, and probably better than humans alone."
I mostly do research at the intersection of ethics and lawyer behavior, or board behavior, or pop culture, or artificial intelligence. I’ve been digging into all sorts of things related to those intersections. I have a bit of a social science background, so I use some psychology and some sociology as well.
Right now, I get to work with a tremendous research assistant, who's modeling some game theory for me using AI programs. We're studying the decisions that led to some law firms fighting the recent executive orders and some law firms caving in. He's been using AI to model different scenarios for me.
And then I have another research assistant with a mathematics background, who's doing the same kind of modeling.
It's really fun, and I'm looking forward to digging into it now that exams are in and everything's graded. I'm also working on a couple of other projects this summer, including writing a book for West Academic on the ethics of legal tech with my co-author Joe Tiano.
You’ve long been a keen observer of how law firms adapt (or don’t) to change. What have you noticed about how legal technology, and especially ediscovery, has reshaped the practice of law over the past decade?
One of the things I do on the side is study attorney fees, and professional fees generally. When I was a so-called “baby lawyer”, we would do things like go to warehouses and do discovery for three weeks. All we would do, for 14 hours a day, was flip through paper and figure out what was responsive and what wasn't.
And then, eons ago, we got ediscovery, and data rooms, which means now you don't have to put somebody up in a hotel and have that person travel to a distant location. You can do searches with ediscovery.
The beauty of it is, computers don't get bored. They don't drift off. As long as you're managing how the searches work, I think they're more efficient, and probably better than humans alone. Humans plus technology is a better deal, for most things.
When it comes to AI, I usually have my law students start out by using it to deconstruct and draft contracts, so they can get used to it right away.
The challenge right now is people not knowing when to start with technology, when to use technology in the middle of something, and when to interject the human in the process. We need to get a better handle on that, not just because these systems hallucinate and lawyers are filing hallucinated briefs (obviously, that's going to happen), but because sometimes the technology is better for brainstorming at the beginning, and sometimes you need to have your arms around it in the same way that, 30 years ago, we had to know what we were searching for on Westlaw to be able to find what we were looking for.
What do you see as the most persistent barriers to wider adoption of modern legal technologies like AI-assisted ediscovery or cloud-based review platforms?
I think it's generational, in a way, but not generational in terms of age. It's generational in terms of seniority at law firms.
That's correlated with age, obviously, but it's not young versus old. It's people who are five years from retiring who have a much different sense of what they want to learn than people who are 15 years away from retiring. For the people five years away from retiring, some of them are on the bandwagon, and it's exciting. But a lot of them are more on the side of, "Well, it's not broke, so why change it?"
It's not worth it economically for them to have to revamp how they calculate value to a client. And in Joe's and my latest article, we're talking about why the pyramid model of big law isn't going to make much sense anymore. But that means that law firms are going to have to think about how they describe the value they add.
The more sophisticated clients are going to start handing drafts to their law firm, saying, "Make it better."
It's horrible when somebody who's not law-trained does it, because then you have to unbuild the car and reassemble it. But with inside counsel, they don't want to spend the money on armies of baby lawyers. They want to generate it in-house, and fast, and then have outside counsel with specialized training look it over. That model is going to start growing at a much more rapid pace.
In your latest article you just mentioned, “Fighting the Hypothetical: Why Law Firms Should Rethink the Billable Hour in the Generative AI Era”, you and Joe explore generative AI’s impact on the billable hour. Why did you feel it was an important topic to explore, and how did you approach putting it together?
One of the things I love about writing with Joe is that he has a naturally inquisitive mind, and we spend time doing the what-ifs. And one day he said, "What if people realize that you can do some things in two minutes that used to take two days? What does that do to the bottom line of big law?"
Since we're both big law refugees, we started thinking about a number of things related to this question, like why did the billable hour stick around for so long? Why is it still going to be important in the future? Where is it not going to be important? How can we help law firms think about what to commoditize with AI, so that they can spend their time on the things that AI isn't yet good at?
It also gets rid of the scut work that both Joe and I hated when we were associates, which is both good and bad. I do worry about training junior associates if they don't know how to do important aspects of legal work. But we started thinking about which law firms have already mastered the commodification of certain work, and how they're translating that into their bills.
We ended up doing a bunch of interviews with people, to find out which firms are passionate about developing their own internal AI tools, which ones are using off-the-rack stuff, for good or for bad, and which ones are just not there yet, except for things that are absolutely proven.
Then we started playing around with it, and we thought, the shapes have to change, because you can't have armies of associates anymore. That's going to be too much overhead.
In that paper, you suggest that generative AI could be the “existential threat” the billable hour has long needed. What do you mean by that, and how do you envision that playing out?
It won't just disappear. For example, when you do a trial, you can't control what happens during it, so you're going to bill by the hour while it’s going on. You're going to bill by the hour in a deposition. But for things where you can control the output, rather than something where other people are controlling the output as well as you, people are going to end up being more efficient by using the right programs for the right types of tasks. And that's going to do two things. It's going to drop the prices for that work, so firms had better commoditize it.
The other thing it's going to do is, for the stuff that's truly first impression, where there's no one out there thinking about it yet, they're going to be able to charge more for that, because they can put their brains to it instead of assuming that everything you do in an hour is equally valuable.
Joe calls it the fallacy of the billable hour. Some of the stuff I do in a billable hour, even now, is worth my rate. Some of it is not. And I think clients are going to say, “All right, for the stuff that's not worth your rate, why am I paying you your rate? We don't pay you to redline anymore, so why are we paying you to do this?”
And that brings us to the Jevons paradox, which, if you don’t know it, is the idea that when you make something more efficient, it tends to drop the price. It's more efficient. It's less costly. And you would think that would mean you make less money, but sometimes when you make it more efficient and the price goes down, there's so much more demand for it that you end up making more money.
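To make that arithmetic concrete, here is a minimal sketch with hypothetical numbers (none of these figures come from the interview): when demand for a commoditized piece of legal work is elastic enough, cutting the price after an efficiency gain can leave a firm with more total revenue, not less.

```python
# A minimal sketch of the Jevons paradox with hypothetical numbers (not figures from
# the interview). With a constant-elasticity demand curve, if demand is elastic enough
# (elasticity > 1), cutting the price after an efficiency gain raises total revenue.

def revenue(price: float, base_price: float, base_demand: float, elasticity: float) -> float:
    """Total revenue under constant-elasticity demand: quantity rises as price falls."""
    quantity = base_demand * (base_price / price) ** elasticity
    return price * quantity

# Hypothetical practice area: 100 matters a year at $10,000 each before the efficiency gain.
before = revenue(price=10_000, base_price=10_000, base_demand=100, elasticity=1.5)
# Efficiency lets the firm price the same work at $6,000; elastic demand more than compensates.
after = revenue(price=6_000, base_price=10_000, base_demand=100, elasticity=1.5)

print(f"Revenue at $10,000 per matter: ${before:,.0f}")  # $1,000,000
print(f"Revenue at $6,000 per matter:  ${after:,.0f}")   # about $1,291,000
```

The elasticity figure is the whole assumption: if demand for the work were inelastic, the same price cut would shrink revenue, which is exactly the tension the paradox describes.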
Could you provide an example of something, whether it’s a workflow or something else, that firms would be able to leverage generative AI to bill more for than they currently do?
Right now, there are a lot of interesting things going on constitutionally that the same old arguments don't seem to apply to. And there’s also this decision of what to work on in this environment, which isn't as easy a decision as it was in prior administrations.
Those aren't off-the-rack types of services, so people can end up charging more for them, even though some of the work that they do will end up being moved to commoditization. I don't think the generation after me will ever have to learn how to make a table of contents, and it's just crazy to think that.
You’ve written extensively on legal ethics. How do you think generative AI changes the conversation around professional responsibility, especially when it comes to transparency, competence, and client trust?
There's a lot going on.
For example, the ABA came out with a formal opinion last year that I don't think was completely well-formed.
The Virginia State Bar came out with what I think is a much better opinion, but there are a lot of different ethics rules that matter.
The first one, obviously, is competence. In the same way that, a few years ago, people weren’t stripping metadata off of PDFs, and others could mine it to see what the internal dialogues were, people have to realize that if you don't understand the basics of how this technology works, like what you feed into it, what's confidential, what's not, and what to ask vendors about security and confidentiality, then you may well have failed Rule 1.1, the duty of competence.
Then we move to Rule 1.2 and Rule 1.4.
Rule 1.2 is scope. The client gets to decide what the objective of the representation is, and the lawyer gets to decide how to achieve that objective, in consultation with the client.
One of the conversations we want people to have is about putting in the engagement letter how you might use artificial intelligence, in part to make sure the client understands what you're doing to achieve the objective, and in part because Rule 1.4 requires you to be transparent with the client. That's part of communication, so we want people to do that.
"Judges are going to start thinking to themselves, 'Why are we approving fees that reduce the return to unsecured creditors just because the lawyers want to throw 10 associates at a hearing, or 10 associates in a meeting, or one to take notes. Why would you have an associate take notes? AI does that for you. Why would we need a human to take notes?'"
And if a client says, "I don't want you to use AI at all. We think it's horrible," then you have to have the conversation with them, saying, "Do you use Microsoft Word? Do you use Grammarly?" And re-educate them as to the different types of AI. They may say, "We don't want agentic AI, at all." Fine. But they have to be educated. Clients have to be educated, and the lawyers have to be smart enough and well-versed in technology enough to be able to explain that to the client.
Then we go to Rule 1.3, which is diligence. You're supposed to keep on top of things, which is one of the advantages of generative AI, not just in terms of doing things fast, but helping you manage workflows, so that you can stay on top of everything.
Then Rule 1.5, which is fees. Joe and I made an argument in the article before this one that, at some point, clients and judges are going to say it's not ethical to charge for humans to perform certain tasks, and that you’ll need to use generative AI because it's cheaper. That's a conversation that will keep evolving.
Rule 1.6 is confidentiality. People are still filing hallucinated cases, and they're still feeding confidential information into ChatGPT. They have to understand how the vendors are using the information they feed these systems, what information to feed in, what kind of agreements you can make with vendors as to what they can learn on and what they can't.
Moving on, there’s then Rule 3.3, which is that you aren’t supposed to lie to the court, meaning you can't file hallucinated cases. And as of last week, attorneys were still filing hallucinated cases, so clearly, that message hasn't gotten through.
Rule 4.1 states you can't lie to non-court personnel, including your client. So again, if you make up something because the AI told you to, you've violated that rule.
Then there are the rules of supervision, including Rule 5.1, which concerns supervising lawyers, and Rule 5.3, supervising non-lawyers. This means that every company, whether it’s a law firm or a regular company, needs an AI governance plan to let people know: you can use it for this, but not for that; you can use it in this way, but not that way. And if attorneys don't adhere to those plans, they've violated their duties of supervision, which require you to create ethical guardrails around your processes.
Rule 5.5 prohibits the unauthorized practice of law. Take Upsolve, for example, which is basically TurboTax for filing a no-asset Chapter 7 petition, but it's AI. It figures out the exemptions, it populates the petition and schedules, and a lawyer glances at it. But there’s litigation over whether that is, in the end, the unauthorized practice of law, or whether it's just the same kind of information you get from looking things up on the web. Courts are split.
All of that stuff is currently going on in ethics.
Could you speak a little bit more about the idea of judges saying it might be unethical for a firm to charge humans for certain tasks? Do you actually see that playing out, where firms basically have to use generative AI, or they might not be fulfilling their duty to their client?
I think it's a year away.
I spoke to some federal judges about it in April. I'm speaking to them again next month, and reminding them that when a firm’s client is paying the bill directly, that's just a conversation between the firm and the client. If the client wants to overpay, the only bound is Rule 1.5, that fees have to be reasonable.
"I do not let students use generative AI on exams, but I want them to be familiar with it, because I think the lawyers who are great at using generative AI will replace the lawyers who aren't, and I want my students’ odds to go up."
But when a third party is paying the legal fees, which is what happens in bankruptcy, for example, the unsecured creditors don't get paid until the lawyers get paid.
Judges are going to start thinking to themselves, “Why are we approving fees that reduce the return to unsecured creditors just because the lawyers want to throw 10 associates at a hearing, or 10 associates in a meeting, or one to take notes? Why would you have an associate take notes? AI does that for you. Why would we need a human to take notes?”
Even if the attorney fees in bankruptcy are being paid out of a carve-out of a secured creditor's collateral, you get more bang for the buck if you're more efficient with that carve-out. So, I think it's about a year away.
With judges, particularly, are you seeing a lot of interest from them about generative AI, or how do you talk to them about it?
Federal judges are starting to be allowed to experiment with it.
The GSA takes forever to approve anything, but now they have some trial programs that I’ve been talking to them about.
One of the things that is very time-intensive for a judge, when he or she is writing an opinion, is to go back and say, “Where in the record was that proven? Where in the record did this person state that?”
Right now, you can do a word search, but it's not as helpful. Wouldn't it be nice to have artificial intelligence, where a judge types in, "Show me where they prove the first element of this complaint," and then boom, the record populates?
In the article you mentioned, we wrote about this one judge who used AI to define the term “landscaping”, and other judges are split about whether that was a good idea or a bad idea, because AI isn't evidence. But there will be a lot more uses for the court that aren’t being taken advantage of.
SALI.org is currently trying to create a unique identifier for every concept in the law. Every judge, every activity. Joe and I worked on the bankruptcy ones, and the idea is, things aren't related in a list. They're related by vectors.
It's trying to represent the law as vectors, to help people find related concepts, case law, precedent, drafts, and documents.
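To illustrate what "related by vectors" means in practice, here is a minimal sketch with made-up embeddings (this is not SALI's actual data model, identifiers, or API): each concept gets a vector, and relatedness is measured by how closely the vectors point in the same direction, rather than by where the concepts sit in a list or outline.

```python
# A toy illustration of "related by vectors" (hypothetical embeddings; not SALI's data model).
# Each legal concept gets a vector, and relatedness is the cosine similarity between vectors.
import math

# Made-up three-dimensional embeddings; real systems use hundreds of dimensions.
concepts = {
    "automatic stay":     [0.9, 0.1, 0.3],
    "relief from stay":   [0.8, 0.2, 0.4],
    "adverse possession": [0.1, 0.9, 0.2],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: values near 1.0 mean the concepts point in nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = "automatic stay"
for name, vec in concepts.items():
    if name != query:
        print(f"{query} vs {name}: {cosine(concepts[query], vec):.2f}")
# The two bankruptcy concepts score about 0.98; the unrelated property concept scores about 0.27.
```

The point of a vector representation is that related concepts can be found by proximity even when no list or taxonomy explicitly links them.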
One of the reasons we wanted judges involved in our project is because it’s important for them to see how connected the case law is and to be able to draw on it.
For example, I’m in Nevada. I’m not in the Southern District of New York, or in the District of Delaware, or in the District of New Jersey. And when people come out here and they say, "Well, that's the way we've always done it in the District of Delaware," I want Nevada judges to be able to say, "Actually, in five cases last month, they didn't do it that way in Delaware," because I want lawyers to have to be more transparent about why they're making certain arguments. So, I'm a huge fan of the SALI project.
The other project I want to mention is RAILS, at Duke University. Duke gathered a bunch of us to work on things like an AI governance policy, so I’m really excited about that.
How do you talk to your law students about generative AI, and the role it will play in their legal careers?
We're at a tipping point there, too.
I do not let students use generative AI on exams, but I want them to be familiar with it, because I think the lawyers who are great at using generative AI will replace the lawyers who aren't, and I want my students’ odds to go up.
I want them to show me how they use generative AI. I prefer right now that they brainstorm without it, but I love that they are creating their own study guides with it, and then they come to me saying, "This doesn't make sense." And often, the answer is, "Because the AI hallucinated." But they're using it to review for themselves. They're using it to generate ideas.
I use generative AI to draft multiple-choice questions, because I'm horrible at them. And then, human-in-the-process, I say, "That's a bad question, but I can make it better this way," and it saves me hundreds of hours in drafting multiple-choice questions.
"As people develop new generative AI systems for us to use, I still want them to remember Asimov's Three Laws of Robotics. I want them to make sure there are safeguards in place."
Joe teaches at Arizona State, and last semester he did a seminar on AI and the law. The students took one of my contracts exam questions, and he said, "I want you to use five different systems and have them write the answer. Then your assignment will be to evaluate the relative strengths and weaknesses of each of them."
Then at the end of it, they were able to say, “Okay, it got this right, it got that wrong, this one was better, this one was worse.” But it also reassured the law students that they weren't going to be out of jobs, because there has to be a human in the loop. There has to be.
With that said, I worry that the students are still so new to the law that they may not be able to tell wheat from chaff, and so I want them to still understand how to do it.
It's true for everyone. For example, the last time I coded, it was in Fortran, which was a long time ago. So I’m not just going to wake up one day and decide, “I'm going to create something using AI to code.” I wouldn't know if it's right or not. I'm not a coder.
So, I don't want these students to think AI is going to solve all of their problems.
Another thing is that generative AI isn't a lyrical writer yet, and the best lawyers are lyrical writers. They have to be able to write without it before they use it to look for holes.
How do you see the role of legal education evolving in response to generative AI? Are law schools moving fast enough to prepare students for the future of practice?
I think some schools are great. Arizona State has always been an innovator.
UNLV is a smaller school, but we now have four of us in the AI space. I want UNLV to be a leader in this.
And of course, there’s Stanford, because it's Stanford.
I think a lot of schools are not feeling the pressure of needing to think about generative AI, and I hope we steal the students from those schools who want to come to a school like ours, or Arizona State, or Stanford, and who realize, "This is the next generation of lawyering. I want to learn from people who are doing that." That's my hope.
Do you think students are looking at how schools are approaching generative AI when they're applying?
Some are. Most are still looking at the rankings. I can't fix that.
But, let me give you an example. My cousin is going to law school, starting this fall. He's going to Fordham Law. He has an intellectual curiosity about him, where he really is inquisitive about things like this, and about how to prepare himself for law school in different ways.
I think that there are people who only look at the rankings, and then there are people who are only looking at the financial part of it, which I get, because who wants to graduate with a mountain of debt?
And then you have the people who say, "Who's going to best prepare me for practice?" And they don't know big law from small law from mid law, so they imprint on big law, because that's the only thing they've heard of.
Do you have anything else that you think it would be important to talk about in this interview, regarding legal tech, generative AI, or anything else that’s on your mind?
As people develop new generative AI systems for us to use, I still want them to remember Asimov's Three Laws of Robotics. I want them to make sure there are safeguards in place. You saw the news story about the generative AI that changed its own programming, so that it couldn't be stopped.
I want those Three Laws of Robotics instilled in these new systems. I think people would feel more comfortable if they’re there.
My other hope is that we have people who think responsibly about this stuff.

Justin Smith is a Senior Content Marketing Manager at Everlaw. He focuses on the ways AI is transforming the practice of law, the future of ediscovery, and how legal teams are adapting to a rapidly changing industry.