AI Ethics and the Law
Navigating the Legal Wild West with Professor Rebecca Delfino
by Gina Jurva
Artificial intelligence is quickly becoming part of the everyday practice of law. It can make tasks like analyzing evidence, reviewing discovery, or summarizing key documents much faster. However, AI also brings up tough questions about ethics. For example, how can lawyers be honest with the court when AI might hallucinate facts? And what happens if judges start to use AI in their decisions?
To shed light on these critical questions, Everlaw recently spoke with Professor Rebecca Delfino, Associate Professor of Law at LMU Loyola Law School. Professor Delfino is a leading expert in legal ethics and has a deep understanding of how AI is changing the legal world. She shared her thoughts during a recent Everlaw webinar, giving a clear picture of what it means to be a lawyer in this new time.
The Rise of the AI Frontier: Current Caselaw
Professor Delfino started by describing the legal landscape as a "Wild West"—a way of saying that AI is being used very quickly, sometimes in a confusing or uncontrolled way.
She highlighted that, on the litigant side, AI is already drafting motions, summarizing discovery, and aiding in case preparation.

However, Professor Delfino also pointed to more unusual uses, such as AI-generated statements delivered at sentencing hearings on behalf of deceased victims. In one criminal case in Arizona, AI was used to create a murder victim's "testimony," delivering a compelling, emotional statement in court.
This AI rendering reportedly influenced the sentencing judge, who handed down the maximum sentence to the defendant. The defense counsel did not object at the time, seemingly not grasping the full implications of what was unfolding. This case shows how fast this technology is moving, often ahead of our ability to understand its ethical and practical challenges.
Judges are also using AI. Professor Delfino recounted a New York case in which an expert witness admitted to using Copilot to help form his opinion. Intriguingly, the judge, perhaps out of curiosity or a desire to understand the technology, later used AI to answer the same question and found the results unimpressive.
These aren’t just ideas or hypotheticals; they are real-world cases that are pushing the limits of what’s allowed and what’s ethical in court.
When AI Meets Established Rules
Using AI in the courts naturally brings up important questions. Professor Delfino shared a hypothetical where one side wants to use generative AI for efficiency, the other side objects, but the court has no clear rules about AI use. She explained that while specific AI rules are still emerging in certain jurisdictions, courts can use existing rules to stop, question, and check how AI is being used.
This approach makes sense – the foundational principles of due process and fair play still apply, even when the tools are new.
The main idea is to apply the same legal wisdom to situations that are new in form but not in spirit.
The Surprising Seduction of AI
One of Professor Delfino’s most insightful observations was about the "seduction" of AI. When asked what surprised her most in her research, she identified how readily people have placed faith in generative AI’s output, quickly treating it as authoritative, despite its known propensity to "hallucinate."
She pointed out that lawyers and judges sometimes trust AI's results more than junior colleagues or even their own instincts. Why? Because AI sounds confident, provides instant answers, and creates an illusion of accuracy. However, as she eloquently put it, "It’s a prediction engine, not a legal database." It does not pull from a closed universe of citable law; rather, it predicts the next most plausible response. Expertise and strict guidance are still required. This is a critical distinction that many, even experienced legal professionals, might not recognize.
Candor in the Age of AI: Verify, Verify, Verify!
This brought the discussion to the core of the ethical concern: the duty of candor to the court under American Bar Association (ABA) Model Rules 3.1 and 3.3. The now-infamous Mata v. Avianca case, where a lawyer cited non-existent ChatGPT-generated cases, serves as a prominent cautionary tale.
Professor Delfino’s message here was clear: the problem isn't using AI; it's not checking its work. AI is useful for brainstorming and outlining, and even for flagging possible cases. But it’s not the final step.
Anything given to a court or a client, especially something with a lawyer’s name on it, puts their license and reputation at risk. Double-checking citations, using trusted legal databases, and making sure arguments are based on real law, not AI’s predictions, is crucial.
If you're wondering about red flags to watch for in AI-generated legal content, Professor Delfino said, "If it sounds too good to be true, it probably is...the red flag is if you put it in and there's a clear, easy answer, that's probably not a real case."
Another warning sign is when lawyers rely too much on AI. While it can write parts of a legal brief quickly, letting AI write the whole thing without proper review is very risky. If the content wasn’t checked and approved by the lawyer, they’re still responsible.
Guardrails and the Duty of Competence
The conversation also covered a lawyer's duty of competence, which requires attorneys to thoroughly research the facts and law and to perform within the standards of the legal community. This duty extends to AI use: if AI produces false or hallucinated information and a lawyer relies on it, the lawyer has failed in that duty.
Professor Delfino then raised a fascinating point about experts, especially in light of cases like Kohls v. Ellison, where an AI misinformation expert submitted a declaration containing fake citations. In the past, lawyers assumed that expert declarations were reliable. Now, Professor Delfino suggests having explicit conversations with experts about their sources, and even considering contractual protections against AI-related misinformation.
This era of AI in law is certainly changing everything, offering incredible power and efficiency. But as Professor Delfino emphasized, with this power comes profound responsibility. The "Wild West" analogy isn't just about uncharted territory; it's about setting new standards and keeping honest, competent legal practice at the core.
To watch the full webinar Candor in the Age of AI: Ethical and Practical Considerations for Lawyers and Judges with Professor Delfino, click here.

Gina Jurva is an attorney and seasoned content strategist located in Manhattan, with over 16 years of legal and risk management expertise. A former Deputy District Attorney and criminal defense lawyer, her diverse litigation skills underscore her steadfast commitment to justice, while her innovative storytelling strategies combine legal acumen with deep insight.