How Dinsmore & Shohl Engineered a 98.2% Recall Rate in High-Stakes Cyber Contract Review Using Everlaw
Combining Coding Suggestions and Predictive Coding for Near-Perfect Recall
by Gina Jurva
To rapidly review over 26,000 contracts in a cyber incident response, Dinsmore combined search, EverlawAI Coding Suggestions, and Predictive Coding—improving performance at each step. That layered approach pushed recall from 92% to 98.2%.
When a cyber incident hits, contractual notice obligations are absolute. Either every required counterparty is notified or the organization faces additional exposure for breach of contract.
This was the reality facing a client of national law firm Dinsmore & Shohl, which needed to rapidly review 26,000 contracts to identify every clause requiring notification of a security incident. Manual review was impractical; time, cost, and risk made it untenable.
The legal team needed a smarter path. Peter Pepiton, Ediscovery Director, and Jennifer Mitchell, a health care privacy and cybersecurity partner, worked with the client to develop a process that was thorough, defensible, and cost-effective. Dinsmore turned to Everlaw’s platform to build a layered strategy, stacking three technologies: search, generative AI, and Predictive Coding, and validating each step to build a foundation of institutional trust.
Here is how they built a workflow that achieved a 98.2% recall rate.
Why 80% Wasn’t Good Enough
In standard litigation, recall thresholds are often negotiated by the parties. Cyber incident response operates differently.
“When you're talking about notifying counterparties to whom you have a contractual obligation… 80% doesn't really do it," Pepiton explained. "You have to be substantially north of that."
The challenge wasn’t volume alone. It was the variability of the clauses. Pepiton described it: "The language can be tough, it can be hidden, it can be vague. Three people can look at it and come to different conclusions. But we had to notify all the counterparties."
In this context, they needed the confidence that they'd caught virtually everything.
The Method: A Layered Approach
Rather than relying on a single tool, Dinsmore built what Pepiton calls "a trimodal approach." Using three Everlaw technologies (search, Coding Suggestions, and Predictive Coding), the team created sequential filters, each designed to improve accuracy over the phase before it.
Each layer served a distinct function. Each was validated before moving to the next.
Phase One: Strategic Search
The first step was straightforward culling. Using Everlaw's search capabilities, the team eliminated drafts, duplicates, and irrelevant documents to reduce noise before applying more advanced tools.
"First we went at it with search and figured that we could get rid of lots of drafts of the contracts and just wind up with final versions," Pepiton said. That simple step eliminated approximately 20,000 contracts, according to Pepiton, reducing the reviewable population before deploying AI and Predictive Coding.
Phase Two: EverlawAI Coding Suggestions
Next, the team deployed Everlaw's Coding Suggestions, a generative AI tool that classifies documents against the team’s customized criteria. Working from a specified case description, code categories, and individual code descriptions, Coding Suggestions rapidly analyzes document sets to suggest whether each document matches those criteria, outputting nuanced recommendations with a written justification for every suggestion.
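To make that configuration concrete, here is a minimal, purely illustrative sketch of how a generative classification pass like this can be structured. It is not Everlaw's API; the `call_llm` stub and all names are hypothetical stand-ins for the case description, code categories, and code descriptions the attorneys supplied:

```python
from dataclasses import dataclass

# Attorney-drafted instructions: what the matter is about and what each code means.
CASE_DESCRIPTION = (
    "Cyber incident response: find contracts that obligate the client to notify "
    "the counterparty of a security incident or data breach."
)
CODE_DESCRIPTIONS = {
    "requires_notice": "Clause requiring notification of a security incident or breach.",
    "no_notice": "No notification duty; generic confidentiality language is not responsive.",
}

@dataclass
class Suggestion:
    doc_id: str
    code: str            # suggested category
    justification: str   # written rationale for the suggestion

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever generative model endpoint is used."""
    raise NotImplementedError("Wire this to an actual model to run the sketch.")

def suggest_code(doc_id: str, doc_text: str) -> Suggestion:
    """Classify one contract and capture the model's one-sentence justification."""
    categories = "\n".join(f"- {name}: {desc}" for name, desc in CODE_DESCRIPTIONS.items())
    prompt = (
        f"Case: {CASE_DESCRIPTION}\n\nCategories:\n{categories}\n\n"
        f"Contract text:\n{doc_text}\n\n"
        "Reply with the category name on line one and a one-sentence justification on line two."
    )
    code, justification = call_llm(prompt).split("\n", 1)
    return Suggestion(doc_id, code.strip(), justification.strip())
```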
Attorneys, not technologists, drafted the instructions. They defined what constituted a notification clause in the context of a cyber incident, identified relevant phrases, and provided examples of generic confidentiality language that would not be responsive.
Then, they created a training set.
"We took a sample of 120 documents and had Jen's team review them to identify which ones contain notice provisions that required us to notify and which ones didn't," Pepiton explained.
After validating Coding Suggestions’ initial performance against that ground-truth set, the team ran Coding Suggestions across the full set of final contracts. The first pass achieved approximately 92% recall, impressive by typical discovery standards, where manual review often tops out around 80%. For Mitchell, though, the goal was to push recall even higher.
The team ran Coding Suggestions two more times on the same contract set. The subsequent passes captured additional agreements requiring notification, thereby improving recall by just over one percentage point.
"Given the number of potential contracts we knew we wanted higher recall" Mitchell said. "So we went back to continue iterating. "
While 92-93% recall would satisfy most discovery obligations, it wasn't enough for incident response. With 26,000 contracts in scope, even small percentage improvements can have an outsized impact.
Phase Three: Predictive Coding as the Final Layer
The decisive step came with Predictive Coding.
By this stage, the team had something critical: a substantial set of documents analyzed by AI and validated through human review. That dataset was used to train a traditional Predictive Coding model within Everlaw.
Predictive Coding relies on machine learning to identify patterns in prior coding decisions, then scores documents based on how similar they are to the training examples, identifying which are more or less likely to align with past determinations. It's exceptionally good at surfacing clauses with unusual phrasing or buried placement.
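As a rough illustration of the scoring idea only (not Everlaw's internal model, which is proprietary), a predictive-coding-style classifier can be sketched with off-the-shelf tools: vectorize the contract text, train on the human-validated coding decisions, and rank the remaining documents by predicted likelihood of requiring notice. The function and field names here are assumptions for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rank_unreviewed(train_texts, train_labels, unreviewed_texts):
    """
    train_texts/train_labels: contracts already coded by reviewers
        (label 1 = requires notice, 0 = does not).
    unreviewed_texts: contracts still awaiting a decision.
    Returns unreviewed documents ranked by predicted probability of requiring
    notice, highest first, so reviewers can prioritize likely notice clauses.
    """
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2, stop_words="english")
    X_train = vectorizer.fit_transform(train_texts)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, train_labels)

    X_new = vectorizer.transform(unreviewed_texts)
    scores = model.predict_proba(X_new)[:, 1]  # probability of the "requires notice" class
    return sorted(zip(scores, unreviewed_texts), key=lambda pair: pair[0], reverse=True)
```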
The final layer built on the generative AI’s initial findings, capturing additional responsive contracts that the earlier passes had missed.
After running the model and conducting formal statistical validation, the team reported a 98.2% recall rate.
This metric provided the mathematical confidence and defensibility the situation demanded. At that point, Mitchell, as the attorney responsible for the outcome, made the call: "I think we're in a good enough spot."
The journey from 92% to 98.2% recall was the difference between already superior performance and a near-perfect outcome.
How the Tools Worked Together
Each technology served a distinct purpose. Search provided broad culling. Coding Suggestions excelled at pattern recognition based on natural language. Predictive Coding captured conceptual similarities based on the initial documents identified by generative AI. Together, they created a comprehensive filter.
Pepiton uses a vivid metaphor to explain the strategy: "By using these various tools in sequence, we constructed a net with smaller and smaller holes to make sure fewer and fewer things are going to fall through."
The Critical Step: Human-in-the-Loop Validation
Pepiton’s philosophy is simple: "Validation cures all."
After each phase, the team ran validation samples. Attorneys manually reviewed sample sets of documents from both the “requires notice” and “does not require notice” groups to measure how the tools were performing. They compared the AI’s determinations against human judgment to calculate precision and recall and decide whether adjustments were necessary before moving forward. Was the AI understanding the prompt? Were the results consistent? Was it safe to proceed to the next phase, or did they need to adjust?
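The metrics at the heart of that validation are straightforward to state. As a minimal sketch (the sample sizes and labels below are invented for illustration), precision and recall can be computed from the attorneys' judgments on a validation sample like this:

```python
def precision_recall(human_labels, ai_labels):
    """
    human_labels/ai_labels: parallel lists of booleans for the validation sample,
    True meaning "requires notice."
    Precision = of the documents the tool flagged, how many truly require notice.
    Recall    = of the documents that truly require notice, how many the tool flagged.
    """
    tp = sum(h and a for h, a in zip(human_labels, ai_labels))        # true positives
    fp = sum((not h) and a for h, a in zip(human_labels, ai_labels))  # false positives
    fn = sum(h and (not a) for h, a in zip(human_labels, ai_labels))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Invented 60-document sample: the tool misses one responsive contract
# and flags two non-responsive ones.
human = [True] * 20 + [False] * 40
ai    = [True] * 19 + [False] + [False] * 38 + [True] * 2
print(precision_recall(human, ai))  # recall = 0.95, precision ≈ 0.90
```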
In the Predictive Coding phase, a final round of sampling confirmed 98.2% recall: a threshold Mitchell felt comfortable defending. The technology wasn’t operating unchecked. Every step was tested before it advanced.
For Mitchell, this structured "human-in-the-loop" approach was the crucial link that enabled advanced technology to deliver value in such a high-risk matter. "I don't know that I'll ever be comfortable if I don't have some involvement in the process," she said. "I don't think it's ever something that I would turn over completely to AI."
What This Actually Achieved
The immediate outcome was confidence: the client could move quickly to notify counterparties knowing obligations were met. But the benefits extended further, starting with cost.
“It saved the client a lot of money,” Mitchell said.
Cost Control and Predictability: The firm avoided the open-ended expense of manual review, replacing it with a predictable investment in technology and senior oversight. By Dinsmore's estimates, a manual review would have required approximately 260 attorney hours, more than six weeks of full-time work.
Strategic Resourcing: Associates and paralegals were freed from manual document review and redeployed to higher-value work such as incident response strategy, drafting notifications, and advising on next steps.
A Reusable Blueprint: This matter was not a one-off experiment. It proved a protocol. Dinsmore now has a documented, defensible playbook for any large-scale, pattern-based review, whether for compliance audits, merger due diligence, or other investigative work. The learning curve for the next case is gone.
Client Trust Through Transparency: Being transparent about how the technology was used aligned the firm’s incentives with the client’s goals, fostering deep trust that the objective of certainty was being met efficiently and defensibly.
Precision Through Process
The Dinsmore case demonstrates what advanced technology can achieve when embedded in a disciplined legal workflow. By layering Everlaw’s tools strategically and validating every step through a rigorous human-in-the-loop process, the firm turned 26,000 contracts into a 98.2% recall rate.
Ultimately, this workflow did more than find clauses; it replaced the uncertainty of manual review with the statistical confidence of a validated result, transforming a high-stakes risk into a defensible standard of certainty.
Gina Jurva is an attorney and seasoned content strategist located in Manhattan, with over 16 years of legal and risk management expertise. A former Deputy District Attorney and criminal defense lawyer, her diverse litigation skills underscore her steadfast commitment to justice, while her innovative storytelling strategies combine legal acumen with deep insight.