
Professor Daniel Linna on Chatbots, Innovation Labs, and What Law Schools Should Be Doing about Generative AI

by Justin Smith

In law schools across the country and around the world, future judges, lawyers, and leaders are passing the same exams and writing the same essays as law students have done for decades before them. They take classes on civil procedure, torts, contracts, and constitutional law.

And while these classes are foundational to creating the next generation of attorneys, at schools like the Northwestern Pritzker School of Law, students are also learning about one of the most exciting technologies the profession has ever seen: generative AI.

Professor Daniel Linna, who holds a joint appointment at Northwestern Pritzker School of Law and the McCormick School of Engineering as Director of Law and Technology Initiatives and a Senior Lecturer, is one of the leading legal scholars in the field. He's making sure every one of his students knows that generative AI is here to stay, through courses such as AI and Legal Reasoning, Assessing AI & Computational Technologies, and Law of Artificial Intelligence and Robotics.

Professor Linna spoke with Everlaw about generative AI, the research he’s doing to help develop AI-based legal tools for under-resourced litigants, what the next generation of law students are working on, and more.

Professor Daniel Linna

I always knew I wanted to go to law school. Starting out in tech, I really enjoyed what I was doing, but it was mainly work I did while taking some time off between undergrad and law school.

And it was interesting because a lot of people would tell me things like, “Why are you working in technology if you want to go to law school? That's not going to do anything for you. You're wasting your time working in tech.”

I didn't realize it at the time, but looking back, I can see that the competencies needed to do well in tech, not just implementing technology but project management, process improvement, really understanding your processes, and using data and data analytics, are all things that can be leveraged to improve legal services delivery in many ways.

Now that you’re a law professor and generative AI has become such a major part of the technological landscape, how are you beginning to teach your students about it, and what level of interest are you seeing on their part?

I've been teaching students about AI for over 10 years.

The first class I taught was actually a negotiation class at the University of Michigan, and then I also taught a quantitative methods for lawyers class at Michigan State. I was really focused on using technology, data analytics, process improvement, and project management in my classes. Early on, I was talking a lot about rules-driven automation and using machine learning in different ways, including for ediscovery and data analytics.

The interest in my AI and Legal Reasoning class has been very high ever since I've been teaching it at Northwestern, even before generative AI tools came out. And I continue trying to get people to understand that AI is more than just generative AI. AI is more than just machine learning and deep learning. I think it's really important to think about this full complement of computational technologies, rules-driven AI, and the importance of databases of ground truth that you can use in building systems. There are so many tools, and innovation is being applied in so many ways.

There's huge interest from our students, and I really try to get them thinking about the many ways in which we can see innovation in the delivery of legal services, as far as leveraging technology and the many different types of technology, but then also thinking about the people, processes, data, and the suite of approaches to innovation or components of innovation. How do we actually do things? How do we measure quality? What is a good outcome? Why is that a good outcome? What are some of the other indicators of a good outcome in a particular space or a high-quality brief? Those are many of the things that we spend time on in my AI and Legal Reasoning class.

I also teach an innovation lab where computer science students and law students work on interdisciplinary teams to develop prototype technology, and we spend a fair bit of time learning about the technology tools. I think a functional understanding of these technologies is critically important for all lawyers and law students these days, but then you also have to introduce these other disciplines and get students thinking about the other skills they're going to need as managers, leaders, and counselors, skills that augment innovation, particularly as legal services delivery models evolve.

Speaking of the ‘CS+Law Innovation Lab’, what types of innovations are you seeing born out of these partnerships between computer science and law students?

I think there are two primary things that come out of that class for students.

One is learning about the different technologies and understanding how they can improve and transform the delivery of legal services.

The other is, again, thinking about people, process, data, technology, and what goes into product development. Really thinking about who the customer is. I don't think we talk about that enough in law schools. You talk about the rules of civil procedure, answering a complaint and how much time you have to do that, filing a motion for summary judgment and the standard for that, and that's great. But really thinking about, okay, who's my client? How do I provide value to my client? How am I going to explain to them what the strategy is here, and what's worth doing? That's important as well.

We help them learn those skills in the context of a class that's focused on developing a product and learning product development, and thinking how you'd market something, how you’d explain to someone what the problem is, and what your approach is to solving it. Those are all things that translate really well to delivering legal services.

And maybe it helps you develop skills to think about it from the perspective of “I'm the lawyer, and although I'm not selling them a startup technology, I still have to explain to them the value I provide and why they should pay a lot of money per hour for my legal services.” The students learn a lot about that on the tech side and on the law side, and they really learn how to work as part of a multidisciplinary team.

In law school, there's too little teamwork to begin with. We should do more of that in law schools, have teams of lawyers working together, managing projects, understanding who's doing what, going through the formation of a team. That's really important. But communicating across those interdisciplinary lines, understanding how to communicate well with someone who's not a lawyer, who's a computer scientist, is invaluable.

"I want to try to convince people that practically every class needs to be an AI class."

We have students who are in their third year of law school, and we have sophomores in computer science, and then we've got people who have worked in the world for several years, and now are in law school. There are a lot of different dimensions where you have diversity in these teams and you have to develop the understanding of, how do we function well as a team? How do we communicate well as a team? How do we allocate work? So that's another thing that really comes out of the class.

And then, as far as the things that we've developed, we partner with real-world organizations, legal aid organizations, law firms, corporate legal departments, and they come to us with real problems, and they give us something that they have a sense for. It's only a one-semester class, so we can't do any sort of clean sheet problem analysis, but we try to not just show up with a development project for the team. We try to say, “This is an area where we're having a problem. Here's our vision for what we could do here. Here's the challenge.” And maybe you're drafting some sort of document or something like that. You need to be able to do it more efficiently. And so the team has a lot of leeway to think about how they're going to go about solving the problems.

It has really been exciting that with some of these projects, we've made enormous progress in just 10 weeks, which is the amount of time the computer science students are with us.

We’ve done projects with the Dominican Republic Supreme Court, legal aid organizations, Rentervention here in Chicago, the Lawyers' Committee for Better Housing, and more.

Part of true innovation is that you're not necessarily sure what it is you're going to do to close the gap. It isn’t just implementing and writing some code to do what you know you can do. It's trying different approaches and seeing what works, and what doesn't work. Just learning to go through that discovery process. And as you start deconstructing things, sometimes you learn things are more difficult than you expected.

Would you say that law schools have a responsibility in this day and age to teach students about technology and how they’ll be able to incorporate it into their practice?

Law schools have had that obligation for a long time. We should have been doing it 10 years ago. So, absolutely, we have an obligation to students to teach them about this technology and how to use it responsibly.

There’s real uncertainty about how these technologies are going to affect the legal practice. But I think there's actually plenty of evidence to demonstrate the way technology, data, innovation, project management, and all these different things are changing legal services delivery. We have to better prepare our students for that world.

We have tools coming into law schools now that use generative AI: Lexis, Westlaw, Casetext, Fastcase, and things like that. Microsoft tools have generative AI built into them. So we need to train our students on how to best use these tools, use them well, and use them responsibly.

When I first started teaching, I did some research on schools that claimed they offered classes on law and technology. But what does that mean? Everyone has different ideas. You might have a clinic that helps entrepreneurs with intellectual property issues. Or you might be teaching an ediscovery class. Or you have a class about the law and AI. Or you have a class like the one I teach on AI and Legal Reasoning. But it’s kind of all over the board.

And now, I want to try to convince people that practically every class needs to be an AI class. We need to be thinking about how these technologies are going to change contracts, for example. That’s one area that's very relevant to our students, thinking about drafting contracts, reviewing contracts, the form of contracts, things like that.

"For people who are so-called self-represented litigants, we need to be thinking about how we can use technology to make it easier for them."

I think very, very few schools are where they need to be on this. There's still a lot of work to be done across the board on these topics. And now, hopefully soon, we'll get to a point where we have some specialized classes about AI.

This is happening to some extent already in some areas. I think it happens less from the perspective of preparing our students to practice law, but that's been a longtime criticism of law schools, things like taking a contracts class and then in practice you don't ever see a contract. You don't ever think about how the client asked for the lawyer to negotiate and draft this contract. What does that look like? Why was it put together this way? That's not usually the way we teach contracts. There are contract drafting classes, which are bringing technology into them more and more, but I think there's still a ways to go. We need to train our students in innovation and using technology responsibly and well.

I also wanted to pivot over to the judges' side of things as well. You were recently appointed to the AI task force by the Illinois Supreme Court, and I was curious if you thought that judges have an equal responsibility to lawyers in gaining an understanding of AI, and legal technology in general. Do you see these sorts of task forces taking on an important role in helping administer that education of judges?

Judges have a really important role to play here. Some of the states that have been most successful in innovating got there because judges on their supreme courts pushed innovation forward.

We can look at what's happening in Utah and what's happening in Arizona. In my home state, Michigan, former Chief Justice Bridget McCormack was driving forward innovation there. And I'm excited about what's happening here in Illinois and the way the Supreme Court is engaged in this. I'm also part of a task force with the State Bar of Texas, and the Texas Supreme Court is engaged in that as well.

So absolutely, judges have an obligation to think about how to use technology to improve service delivery to society and to individuals who show up in courts. A big part of the conversation has been about how we use technology, and how we use it responsibly.

Part of this has come up even with just using Zoom in remote hearings. There's been reluctance among people that, well, if we just do court on Zoom and we don't have all the trappings of court with the marble columns and things like that, are we going to lose respect for courts? But the truth is that there are challenges already for courts. Every time I see data about filings in state courts, the numbers are declining. That's not because there are fewer disputes in the world, but because people are either going without or they're going to alternative fora.

It's crucial, of course, to the rule of law that people have accessible court systems, and that more and more people show up.

So, for people who are so-called self-represented litigants, we need to be thinking about how we can use technology to make it easier for them. Virtual hearings are one way, so that it's easier to take care of your business with the court, whatever that might be.

And then we also need to think more broadly in terms of how technology can be used in courts. I'm hearing more and more judges say things like, “Well, everywhere you go in the world, technology is being used to improve customer service, to improve meeting the mission and vision and the goals of whatever organization it is. And are people going to lose respect for courts if they become antiquated and aren't figuring out how to use technology in a way to better serve society, to better uphold rule of law, to make sure that people get due process in court?”

I've been really excited in some of the executive education and speaking engagements I've been doing with judges. A lot of judges, particularly state court judges who have really busy trial dockets, are excited about the possibility of how they can use these technological tools to become more efficient on the things that don't require a lot of judgment and human attention. They’re excited about the potential it has to provide them with more bandwidth to do the other things that really do require judgment and human attention, so they can make a difference in people's lives.

And going off of that, on the AI front, I'm sure you've seen some courts like the Fifth Circuit proposing these rule changes that would restrict the use of AI or require the disclosure of it. I was curious what you thought about these orders, and if you see them as maybe helping to lead a more responsible implementation of the technology, or if it might have the opposite effect and could hinder its use?

These orders are totally misguided.

What is the problem that the courts are trying to solve with these orders? I think they would say they're concerned about hallucinations, and about people filing papers that cite made-up cases. But these orders are not going to solve that problem.

If you've read the transcripts of the hearings involving the lawyers who made these mistakes, several of these cases involve misrepresentations to the court, with lawyers trying to cover things up after they knew what had happened. Other misrepresentations connected with these incidents are just totally egregious, improper uses of the technology.

So, I don't think putting these standing orders in place is going to deter the kind of person who's making these mistakes and isn't checking the judges' standing orders to see what they need to disclose.

There are already rules in place. There's some version of Rule 11 in every court. When you're the lawyer, and you put your name on that pleading, you know that if you've misrepresented facts, or you've misrepresented the law, that that's a huge problem. You've put your license at risk when you do that.

I don't think we need more of these orders. They're going to potentially chill the use of generative AI, and some of them go so far as to suggest that litigants, including self-represented litigants, cannot use AI at all. Some are so broad as to cover all AI, though more and more are narrowed to generative AI. Okay, so what does that mean? Does it mean that if I use the latest version of GrammarlyGO, which has generative AI built in, I'm now supposed to disclose that? I don't know.

The Rentervention project that we work on here in Chicago through our Innovation Lab uses large language models and generative AI. If someone is involved in a housing matter and uses it to draft a demand letter or to help draft anything for the court, do they have to disclose that? I mean, I don't know. It's not so clear to me. So I don't think these orders are actually going to solve the problem. But what could solve the problem is the judge or the courts using technology.

"I think there are a lot of things that we can do to help people understand more proactively what their rights are, what their responsibilities are, and what kinds of things they can do to help ensure they get their security deposit back."

My understanding is that for many years now the federal courts have had an agreement with Westlaw to use Quick Check, where you put the briefs in. Lexis has something like this, and so does Casetext. You load the briefs in, and it can tell you what cases were cited in all the briefs, and what cases were not cited that maybe should have been. There are tools for judges to check these briefs and make sure the cases cited are still good law.

Where are the headlines about judges who grabbed a brief off their hard drive from two years ago, didn't carefully check the case law, and now are citing cases that have been overturned or are no longer good law?

I don't think we need these new rules, and I'm really troubled that courts are putting in orders suggesting that if you're a self-represented litigant, you cannot use generative AI unless it's Westlaw or Lexis. So if you're a self-represented litigant, you can't afford a lawyer, but you're supposed to get a Westlaw subscription if you want to use generative AI?

I'd love to see the ACLU and other organizations mount a challenge, because I think some of these go too far. Of course, the courts can sanction you if you file briefs that have bad law in them. A self-represented litigant got sanctioned pretty seriously in Missouri for that. And that's the court's prerogative to do that. I understand it's a problem, but I don't think it's great that we're sanctioning self-represented litigants. I'd rather we use software to try to correct concerns about cases cited and things like that. If you lie about the facts, that's a different thing.

It's kind of a Marie Antoinette moment. The court is sort of saying, “let them eat cake.”

These people can't get a lawyer, and they might not qualify for legal aid, and now we're saying they can't use generative AI tools, which could include vetted generative AI tools. I don't think it's great if people are using ChatGPT when they're going to court, but on the other hand, if you have nothing else, I just don't think it's a great look to ban it. And I think the most aggressive of these orders raise some constitutional concerns.

I'm glad you asked that question. I think we need more Rentervention-type tools in the world. One of the concerns I have about the boom in generative AI is that we're going to continue trying to do too much stuff with probabilistic AI tools, and I think there are actually lots of areas where we can create rules-driven systems that could be really helpful for people.

Another example is the work my colleague, Sarah Lawsky, does at Northwestern. She has done a lot of work on formalizing the tax code. She's working with a group of researchers who've created a programming language called Catala, which is designed for reasoning about the tax code. It's not AI; it's using a programming language to solve legal problems. I think there are plenty of opportunities to use that sort of approach to improve legal services delivery.
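To make the rules-as-code idea concrete, here is a minimal illustrative sketch, written in Python rather than Catala, of how a deterministic legal rule can be encoded as ordinary code. The deposit-return deadline and penalty below are invented for demonstration and are not real law.

```python
# Illustrative only: a hypothetical rules-as-code encoding in Python.
# The research described above uses Catala; the deadline and penalty
# here are made up for demonstration, not taken from any statute.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional


@dataclass
class DepositFacts:
    lease_end: date                   # date the tenancy ended
    deposit_returned: Optional[date]  # date the deposit came back, if ever
    deposit_amount: float


def deposit_outcome(facts: DepositFacts, deadline_days: int = 30) -> dict:
    """Apply a hypothetical rule: the deposit must be returned within
    `deadline_days` of the lease ending, or the landlord owes the deposit
    back plus an equal penalty (both numbers are invented)."""
    due_by = facts.lease_end + timedelta(days=deadline_days)
    late = facts.deposit_returned is None or facts.deposit_returned > due_by
    return {
        "due_by": due_by,
        "violation": late,
        "amount_owed": facts.deposit_amount * 2 if late else 0.0,
    }


if __name__ == "__main__":
    facts = DepositFacts(date(2024, 5, 31), None, 1500.00)
    print(deposit_outcome(facts))  # deterministic: same facts, same answer
```

The point of a rules-driven system like this is that the same facts always produce the same answer, with no probabilistic model in the loop.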

Now, of course, at the end of the day, many of these systems aren’t going to be just one thing. There's very little out there that's just machine learning or just generative AI. You're using retrieval-augmented generation, which feels a little bit more like using rules to me because you're narrowing the scope of what you want the system to use. So you're going to have these systems that use a variety of different tools to deliver a solution.

Rentervention is actually an example of that, because its initial implementation uses Google Dialogflow. You describe your problem in natural language, and the system drops it into a bucket: what kind of problem is this, actually? Then it leads you down a rules-driven process to gather all the information, process that information, and tell you an outcome.
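As a rough sketch of that intake pattern, and not Rentervention's actual implementation, the flow might look something like the Python below: a stand-in keyword classifier drops a free-text description into a bucket, and each bucket leads into a fixed, rules-driven list of follow-up questions. A real deployment would use something like Dialogflow for the natural-language step, and the intents and questions here are hypothetical.

```python
# Minimal sketch of a rules-driven intake flow (hypothetical intents and
# questions). In a real system, a natural-language service such as
# Dialogflow would handle the classification step.

INTENT_KEYWORDS = {
    "security_deposit": ["deposit", "didn't get my money back"],
    "eviction": ["eviction", "notice to vacate", "locked out"],
    "repairs": ["repair", "mold", "no heat", "leak"],
}

# Each bucket leads into a fixed list of follow-up questions.
FOLLOW_UP_QUESTIONS = {
    "security_deposit": [
        "When did your lease end?",
        "How much was the deposit?",
        "Has any of it been returned?",
    ],
    "eviction": [
        "Have you received a written notice?",
        "What date is on the notice?",
    ],
    "repairs": [
        "What needs to be repaired?",
        "Have you notified your landlord in writing?",
    ],
}


def classify_problem(description: str) -> str:
    """Drop a free-text description into a problem 'bucket'."""
    text = description.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"


if __name__ == "__main__":
    bucket = classify_problem("My landlord kept my deposit after I moved out")
    print(bucket, FOLLOW_UP_QUESTIONS.get(bucket, []))
```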

I did some research on Rentervention with my colleague at Purdue, Sabine Brunswicker, where we ran a randomized controlled trial. We created two versions of the bot, one that uses very plain language and another that uses the most empathetic language identified through lots of social science experimentation, and then asked each a list of static frequently asked questions just to compare the different versions of the bot.

There were a lot of interesting findings, not just on the problem-solving capabilities, but also on improving the “social” capabilities of the bot. I'd like to see more studies like that, to see how we can improve these bots, how we can help people develop a better bond and understand when they should trust the chatbot they're interacting with, and how to make it a better user experience for people.

"I've encountered so many judges who are genuinely interested in learning about this technology and think it's important. They see the potential value of bringing AI into courts, bringing technology tools into courts through online dispute resolution, using tools to help them with different tasks that they do, making sure litigants have technological tools available to them."

I get pushback that using this tech is second-class justice, that everyone should get a human lawyer. But I don't think everyone wants a human lawyer. If you're just trying to get your security deposit back, why do you need a human lawyer? Shouldn't you be able to create a system that gives you the basic facts and helps draft a demand letter that cites appropriate authority? Now, if you don't get a response and it doesn't work, maybe you do need a human lawyer at that point in time.

But I think there are a lot of things that we can do to help people understand more proactively what their rights are, what their responsibilities are, and what kinds of things they can do to help ensure they get their security deposit back.

The next stage of research that I'm doing with Sabine Brunswicker is looking at how we can add large language models to these systems. You can guide these systems to be very powerful in what they can produce. But what about an area like landlord-tenant law where you know the rules, and you really know what legal advice should be given to someone? Can you use prompting techniques to then use the large language model once you learn something about the person so that you can explain the rules in a way that is easier for them to understand and help lead them down the path, make it more conversational, develop trust, better serve the person who needs help?
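One hedged sketch of what that prompting technique might look like appears below, under the assumption that the legal conclusion comes from a deterministic rules step and the large language model is asked only to restate it in plainer, more conversational terms. The OpenAI client and model name are used just as one example of an LLM API, not as the research system itself, and the rule_result and user_context values are invented.

```python
# Sketch: a rules engine produces the legal conclusion; the LLM only
# rephrases that fixed conclusion for the reader. Assumes the openai
# package is installed and OPENAI_API_KEY is set in the environment.

from openai import OpenAI

client = OpenAI()

# Hypothetical output of the rules-driven step, treated as ground truth.
rule_result = {
    "issue": "security deposit",
    "conclusion": "The landlord missed the return deadline, so the tenant "
                  "may demand the deposit back plus the statutory penalty.",
    "next_step": "Send a written demand letter citing the relevant statute.",
}

# Hypothetical details learned about the person during intake.
user_context = "Tenant, prefers simple English, very worried about rent money."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You explain legal information in plain language. Do NOT add, "
                "change, or infer any legal conclusions. Restate only the "
                "conclusion and next step you are given, in short sentences "
                "tailored to the reader."
            ),
        },
        {
            "role": "user",
            "content": f"Reader: {user_context}\nConclusion to explain: {rule_result}",
        },
    ],
)

print(response.choices[0].message.content)
```

Keeping the conclusion fixed and asking the model only to rephrase it is one way to get the conversational benefits described here without letting the model generate legal advice on its own.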

We tend to talk at a 50,000-foot level about so many of these things. And I think we need to drill down more on what is law? What are the legal problems we're talking about? If I was back practicing as a lawyer again, and you were to talk to me about automating legal services, maybe I'm thinking, “Well, yeah, I just tried a case yesterday and I was arguing to the judge and no computer is going to do that.”

There are all sorts of other things that I did to prepare for that hearing or during discovery or after the trial, filing motions and analyzing pages of trial transcripts, that technology can help with. And there are all sorts of people issues and access to justice issues where we can do some amazing things with rules-driven systems and better-created systems. Some of these technology tools and generative AI tools can help us create those systems. There are so many different ways innovation is coming to bear on solving these problems that can help us improve the delivery of legal services and access to justice.

I wanted to switch gears a little bit to the report that you just co-authored on the potential government use of deep fakes. The report in part touched on the fact that international law scholars generally don't view cyber espionage as a violation of sovereignty. And so I was curious, as deep fakes and AI become more sophisticated and pervasive, if you see a potential need to rewrite the law to include cyber espionage and the intentionally malicious use of this technology.

So that report from the Center for Strategic & International Studies is just one project that I'm working on involving deep fakes. We're working on another project with a couple of federal court judges on deep fakes in court: how the Federal Rules of Evidence apply when one party introduces something and the other party says it's a deep fake, and how to handle that.

My expertise is more on the technology side and thinking through these legal issues. We talk about how law is not keeping up with technology, and international law is kind of a special flavor because it's hard enough to do things within the confines of a particular nation, much less getting coordination across nations to think about how the law should apply to these things.

But also, as with the law in these other spaces, it's not quite correct when you hear people say things like, “It's the Wild West, you can do whatever you want with generative AI.” That's not true. Generative AI is just a tool, and the law has something to say about how you deploy it and what you're trying to do. We need to think through how these technologies are going to be used. I think it's tricky in the international law space, and this is like an advanced version of that, thinking about deep fakes and how the law should apply to them.

The other thing to highlight here is that lawyers really need to be aware of how this is going to change lawyering. These AI tools are going to automate or eliminate a lot of the things lawyers currently do, and I think the question is, well, what are the new things that lawyers are going to do?

I think back to some of the things I did when I was practicing, and I don't remember a matter where I didn’t wish I'd had just a little bit more time or a few more hours where I felt like I could have really done some things that would have added a lot of value for the client.

I think, fortunately, there are more and more places to go to get educated on these topics. For example, in Illinois, there's this EdCon conference every two years where all the Illinois judges go for continuing legal education. Just last month, Chief Judge Michael Chmiel and I did a session about AI for law, and we had over 100 judges there. We're going to do another session for the other judges in April, and I bet we'll have a similar kind of turnout.

We also have judicial conferences, and some judges have come to those conferences, and that's a great way to stay educated. Some of this is also captured on video and available on YouTube. Coursera is another great resource. Andrew Ng has a bunch of great courses on Coursera, one called AI for Everyone. There's now a companion course called Generative AI for Everyone.

It's tricky right now because things are changing so fast, and there's a lot of stuff out there. And I don't know if there's consensus around the best training materials, but I would say, going back to some of the stuff that's been around for a little while, and learning some of the foundations of AI, is a good place to start.

There are a handful of law schools that have done some programming specifically for judges. I've encountered so many judges who are genuinely interested in learning about this technology and think it's important. They see the potential value of bringing AI into courts, bringing technology tools into courts through online dispute resolution, using tools to help them with different tasks that they do, making sure litigants have technological tools available to them. I think there's been a mindset shift where people are more open. They've seen the capabilities. It's more apparent now.

Sign up for a ChatGPT account. If you can spring for the $20 per month, use the Plus version. Go to Claude and sign up for that. Try Google Gemini. Use the Microsoft tools that are available, and just learn by using them and using them responsibly. You have to understand the risks around confidentiality and hallucinations, and what information you have to be careful about putting into these systems.

We're going to see the most innovation from this once organizations start encouraging people to use it. Just make sure they know enough about it to use it responsibly and use it well.