
Everlaw’s Generative AI Principles

by AJ Shankar


Shortly after I completed my PhD in Computer Science, I founded Everlaw with the ambition of transforming the way legal teams work. From the very start, Everlaw has applied modern AI techniques to ediscovery with the right controls and principled design. Our AI-powered language translation, added way back in 2011, not only provides incredible value to our users, but also gives them control over when it is used and whether data is sent to third parties. Our predictive coding not only delivers a simple way to train a model and use its results in review, but also provides rich performance statistics that give users confidence in the results our AI models generate. Finally, our focus on developing thoughtful user experiences led to our clustering feature, which pairs a novel, transformative UX with massive scalability built on cutting-edge AI algorithms.

More than a decade later, I’ve never been more excited about the progress and the prospects ahead. The new cohort of generative artificial intelligence offerings holds incredible promise for the legal field. Generative AI gives legal teams a powerful, efficient way to overcome the inefficiencies that undermine the practice of law.

However, these new tools also bring significant concerns, from privacy to accuracy. These competing factors of value and risk create a healthy tension. We call this out because we want our customers to understand what we stand for in our practice of generative AI as we deliver the most advanced legal technology platform in the world.



Here are the key principles we are committing to in delivering responsible generative AI to our customers:

Control

Customers will be able to opt in to or out of using our generative AI tools. If they choose to use these tools, any interaction with generative AI will be clearly indicated to the user.

Confidence

Generative AIs can provide immense value, but they can also make mistakes. We want to ensure that users can develop confidence in their results. To that end, where possible, we will:

  • Tailor generative AI features to specific use cases that we believe perform reliably

  • Require the AI to cite specific, immediately verifiable passages of text from users’ evidence as justification for its responses

  • If that is not possible, ensure that users have clear access to any evidence provided as context to the AI, so that they may perform a more comprehensive validation if they so choose

Transparency, Privacy, and Security

Consistent with our strong security culture, before we use any third-party AI tool, our legal and security teams will evaluate the system’s operational approach, the data it will process, and the safeguards in place to protect that data, and will review the governing legal terms to ensure they align with our commitments to our customers. We will only select third-party AI tools that allow us to prevent them, or anyone else, from training models on our customers’ data. We will notify our customers about the third-party tools we are using. We’ll also ensure that our security obligations, such as FedRAMP, are met before offering any such tool to customers who rely on those obligations.



We believe this new generation of AI will be transformative over the long term rather than a flash in the pan, so we are more interested in being the best at generative AI than being the first. While we have multiple teams working on our generative AI features, they are working under the guidance of these principles and collaborating with our customers to ensure we deliver features consistent with our long-term philosophy of acting with integrity and discipline: doing the right thing and doing it the right way.

Our goal in publishing these principles is to let you know the thought and rigor that are going into our generative AI approach. I am a generative AI optimist, and I believe that with the right implementation, the benefits of this technology for legal professionals will be significant. Everlaw aims to be the legal industry’s most trusted AI platform, and we’ll work every day to earn that trust.