
Second Circuit Faults Lawyer Using ChatGPT, Stops Short of AI Order

by Justin Smith

Less than a year ago, Tesla’s attorneys argued in court that a video of Elon Musk stating certain Tesla models had autonomous driving capabilities was a deepfake.

One month ago, Donald Trump’s former attorney Michael Cohen used AI-generated case citations as part of a bid to end his court supervision early.

And just this week, the Second Circuit Court of Appeals referred a lawyer to its attorney grievance panel for citing a non-existent state court decision she got from ChatGPT.

If generative AI is seemingly everywhere these days, its implications are perhaps having the most outsized effect in the legal sphere. And what we’re seeing is just the tip of the iceberg.

Despite concerns over the proliferation of consumer generative AI models like ChatGPT, targeted and responsible implementations of generative AI, with guardrails built in, have firmly established themselves as a viable technology for legal professionals.

That said, when attorneys misuse these generative AI models, courts have been quick to protect themselves, with judges implementing AI disclosure requirements and even outright bans. This could ultimately pare back the beneficial applications of the technology we see today.

ChatGPT in the Courtroom

This isn’t the first instance of ChatGPT appearing in a New York courtroom: the now-infamous Avianca Airlines case gained notoriety after attorneys for the plaintiff cited fake cases generated by ChatGPT in their filings to the court.

While the lawyer in the Second Circuit case admitted to using a case “suggested” by ChatGPT, she said she had not acted with prejudice or in bad faith. “I am committed to adhering to the highest professional standards and to addressing this matter with the seriousness it deserves,” she stated.

Cases like these are likely to continue as the adoption of AI models becomes more widespread, putting courts on the defensive.

Standing Order Roulette

With the Fifth Circuit Court of Appeals proposing a rule requiring disclosure of generative AI use in drafting documents presented for filing, and the Third and Ninth Circuits launching AI-related committees to investigate whether any action is needed, courts seem split between acting proactively and letting the existing Federal Rules of Civil Procedure stand on their own.

While the Second Circuit’s ruling is significant because generative AI was once again used to fabricate a court decision, it’s equally notable that the court didn’t consider an order setting standards for the use of AI necessary, even though the referred attorney had asked the court to advise attorneys to use caution with the technology.

The ruling stated:

Indeed, several courts have recently proposed or enacted local rules or orders specifically addressing the use of artificial intelligence tools before the court. But such a rule is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.

This is consistent with the Federal Rules of Civil Procedure, as the court explained:

All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure… At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely. Indeed, we can think of no other way to ensure that the arguments made based on those authorities are "warranted by existing law," Fed. R. Civ. P. 11(b)(2).

Navigating these standing orders has become something of a minefield for attorneys. Judge Paul Grimm, one of the leading voices on generative AI and its effects on the law, recently spoke with Everlaw about this very issue.

“The challenge I have is that with some of those orders, they might end up being a bit all over the place,” Grimm said. “First of all, you've got, what, a thousand federal judges out there? So, is each judge going to have their own policy? Maybe some of them will follow the same one. Someone's going to tinker with it, someone else is going to change the language, and some of them are just overbroad. I think this increasing number of one-off orders creates confusion. The motivation is well-intentioned, but trying to follow through on these things and monitor them and keep track of them, if you're a lawyer that practices in many jurisdictions, can be nearly impossible.”

Moving Forward

The next steps that courts take regarding generative AI will affect legal professionals across different industries.

While cases like the one out of the Second Circuit are what make headlines, attorneys are using generative AI every day to do real, meaningful work.

As Manuel Delgado, Litigation Support Manager at Cole, Scott & Kissane, said of his firm’s use: “If I have a deposition – let's say 300 pages long – it could take hours to summarize, and the AI Assistant can do it in less than a few minutes with a topic index that we can then fact-check. In complex cases, such as employee, construction and insurance work, having a tool that will get my clients ahead of the game is invaluable for the firm.”

Automating tasks such as document review and foreign language translation has saved untold hours of tedious busywork. The ability to answer open-ended questions with direct citations has helped surface hard-to-find information that can take on a central role in a case. Turning prompts into first drafts has given attorneys time to focus on crafting better arguments.

This evolving technology presents an opportunity to push the law forward and create meaningful change for all parts of the legal system. Utilizing it responsibly is essential to helping legal professionals make the most of its capabilities.