Tips for Writing Effective Legal Prompts
by Zach Sousa
Lawyers don’t need to become prompt engineers to get the most out of generative AI.
Crafting effective prompts draws on skills lawyers have been honing all along: logic and reasoning, attention to detail, and clear communication. The beauty of GenAI tools like EverlawAI Assistant Coding Suggestions, which evaluates documents and suggests codes to speed up review determinations, is that instructing them is much like briefing a human colleague.
Both require context, guidance, and clear expectations about the desired outcome. Lawyers still need to learn how to write effective prompts, because these tools are only as good as the instructions they receive.
In “Best Practices for AI Prompting,” an Everlaw community webinar, legal pros learned how to instruct Coding Suggestions for the best results. The program covered using precise language to describe the context and the task, iterating on prompts to improve the results, and combining Coding Suggestions with Predictive Coding to make document review even more efficient.
While these best practices help legal professionals expedite document review, they are also broadly applicable to instructing any LLM for legal work.
[Ed. Note: Community roundtables, AMAs, and other events are exclusive to members of the Everlaw Ediscovery Community. To join, sign up here.]
Making Descriptions Clear, Detailed, and Nuanced
A well-structured GenAI prompt follows a formula: a clear statement of your goal or intent (what type of answer are you seeking?), enough context to supply the necessary background, and an instruction that is specific and detailed.
For example, when generating Coding Suggestions, EverlawAI Assistant uses three types of descriptions to set the stage:
The case description, which should include a high-level summary of the main issues, key entities and their roles, and any other important context.
The coding categories, which define any technical terms and acronyms and list alternative names for entities to capture more instances.
The coding descriptions, which should describe the specific criteria for applying a code, and define any ambiguous language or subjective terms.
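As a hypothetical illustration (the matter, parties, and codes below are invented for this post, not drawn from any real case), the three descriptions for a breach-of-contract dispute might read:
Case description: “This matter concerns an alleged breach of a 2022 supply agreement between Acme Corp. (the supplier) and Birchwood LLC (the buyer). Key individuals include Jane Doe, Acme’s VP of Sales, and John Roe, Birchwood’s procurement lead.”
Coding category: “‘The Agreement’ refers to the 2022 master supply agreement, also called the ‘MSA.’ ‘Birchwood’ may also appear as ‘BW’ or ‘Birchwood Logistics.’”
Coding description: “Apply the ‘Responsive’ code to documents discussing delivery schedules, quality complaints, or termination of the Agreement. ‘Quality complaints’ means any communication alleging defective or late shipments.”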
The quality of the coding suggestions depends directly on these descriptions. Vague prompts lead to generic or inaccurate results, while specific, detailed prompts yield more precise and relevant outcomes.
Note: Verification of GenAI outputs by legal experts is still critical. To make validation straightforward, Coding Suggestions provides a rationale for each suggestion and, whenever possible, links to the relevant areas of the document, so teams can quickly verify the results.
Improving the First Draft Through Multiple Iterations
Prompting an LLM is an iterative process. A first draft from a single prompt should never be the final product. A legal professional who is familiar with a matter is in the best position to iterate on initial prompts for the best results.
When working with Coding Suggestions, begin by testing your prompts on a small sample of documents, and keep refining them before running the suggestions across an entire dataset. Errors at this stage are not setbacks: take note of them, update the descriptions, and rerun the suggestions on the original sample set of documents. This verification loop ensures that the updated suggestions align with the manually applied codes and that there are no regressions.
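To make the iteration concrete (again, a hypothetical example rather than guidance from an actual matter): suppose a coding description reads, “Code documents about pricing as Hot,” and the sample run flags routine invoices. The description could be tightened to, “Apply the ‘Hot’ code only to documents discussing pricing negotiations or special discounts; routine invoices and purchase orders are not ‘Hot.’” Rerunning on the same sample then shows whether the revision resolved the errors without introducing new ones.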
Once Coding Suggestions regularly returns results at the desired level of accuracy, it can be run across larger document sets to expedite the rest of the review process.
Combining Predictive Coding and Coding Suggestions for Speed
For even greater efficiency, users can set up a predictive coding model to prioritize which documents to send to Coding Suggestions.
Predictive Coding is based on supervised learning: it learns from reviewer decisions to predict document relevance across an entire case.
By running Coding Suggestions only on documents that your predictive coding model has identified as highly likely to be relevant, you can focus the LLM’s efforts where they are most needed and significantly accelerate your review.
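As a hypothetical workflow, a team might first train a predictive coding model on an initial round of reviewer decisions, then send only the documents the model scores above a chosen relevance threshold to Coding Suggestions, leaving the lowest-scoring documents for sampling or batch handling. The right cutoff is a judgment call that depends on the matter and on how the model performs during validation.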
Mastering Prompt Writing for the Law
Mastering prompts doesn’t mean lawyers need to earn a computer science degree. AI prompting calls for the same precision and strategic thinking that lawyers already bring to their work. The best results come when legal expertise shapes the questions, providing the context and nuance AI alone cannot.
Clear and precise prompts help EverlawAI Assistant deliver suggestions that exceed first-level human reviewers in both speed and accuracy. The better the prompt, the more accurate the resulting suggestions.
Nothing proves the argument better than trying it yourself: start small, run a test project, and put these prompting strategies into practice. The more you experiment, the sharper your prompts — and your results — will become.
To learn more about Coding Suggestions and EverlawAI Assistant, request a demo today.
Zach Sousa is a Customer Marketing Manager at Everlaw. Since joining the marketing team in July of 2021, Zach has organized Everlaw customer engagement programs such as webinars and customer testimonials. Prior to Everlaw, Zach had five years of experience as a B2B marketing manager in the medical device industry.