Seemingly minor inconveniences can make orchestrating an entire document review team challenging. Deciding what degree of precision is appropriate for predictive coding, turning a corpus of reviewed documents into a cohesive trial outline, and communicating with a review team about logistical hurdles can all become major snags in an already demanding process. Our latest release removes these common roadblocks and helps organizations manage investigations and litigation seamlessly, end to end.
Eliminate Bias in Predictive Coding with Rigorous Statistics
Predictive coding is great for coordinating an efficient review, especially given the growing data sizes involved in even routine matters. The predictive coding system in the Everlaw platform learns from existing review decisions to predict how a review team will evaluate the remaining, unreviewed documents in a given project. It examines documents, dissects their attributes, and develops a model that predicts the relevancy of any particular document, enabling review teams to home in on documents that are more likely to be relevant. The key is to use the right corpus of reviewed documents to teach the prediction system.
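To make the idea concrete, systems like this typically fit a text classifier to the documents a team has already coded and then score the unreviewed remainder. The sketch below is a minimal, hypothetical illustration using a naive Bayes model over word counts; it is not Everlaw's actual implementation, and the documents and labels are invented.

```python
import math
from collections import Counter

def train(reviewed):
    """Fit a naive Bayes model from (text, is_relevant) review decisions."""
    counts = {True: Counter(), False: Counter()}
    docs = {True: 0, False: 0}
    for text, label in reviewed:
        docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts[True]) | set(counts[False])
    return counts, docs, vocab

def predict_relevance(model, text):
    """Return P(relevant | text) under the fitted model (Laplace-smoothed)."""
    counts, docs, vocab = model
    total = docs[True] + docs[False]
    log_scores = {}
    for label in (True, False):
        logp = math.log((docs[label] + 1) / (total + 2))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            logp += math.log((counts[label][word] + 1) / denom)
        log_scores[label] = logp
    # Normalize the two log-scores into a probability.
    m = max(log_scores.values())
    exp = {k: math.exp(v - m) for k, v in log_scores.items()}
    return exp[True] / (exp[True] + exp[False])

# Invented review decisions standing in for a reviewed corpus.
reviewed = [
    ("quarterly merger negotiations memo", True),
    ("merger due diligence checklist", True),
    ("office birthday party invitation", False),
    ("parking garage access reminder", False),
]
model = train(reviewed)
score = predict_relevance(model, "notes from merger negotiations")
```

A real system would use far richer features than raw word counts, but the shape is the same: learn from coded documents, then rank the rest by predicted relevance.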
Previously, our platform generated performance statistics solely from the total set of reviewed holdout documents. While simpler to implement, this method risks measuring model performance against an unrepresentative sample, because the set of reviewed documents isn't necessarily random: human biases, in the form of keyword searches and review decisions, may have shaped it. The resulting performance data is correspondingly more likely to be skewed than data drawn from a set untouched by those biases, raising the risk that reviewers pass over a potentially important document.
The Everlaw platform now gives review teams greater confidence in the accuracy of their model. By selecting rigorous performance statistics, reviewers have their predictive coding model evaluate itself against a purely random sample of 5% of all holdout documents. This gives teams the option to prioritize statistical defensibility or simplicity, depending on their goals for predictive coding. For more detail about rigorous performance statistics, read our help article.
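The mechanics behind an evaluation like this are simple to sketch: draw a purely random 5% sample of the holdout set, then compute performance statistics such as precision and recall on that sample. The following is an illustrative sketch, not Everlaw's code; the function names and the precision/recall choice of metrics are assumptions.

```python
import random

def rigorous_sample(holdout_ids, fraction=0.05, seed=None):
    """Draw a purely random sample of the holdout set for evaluation."""
    rng = random.Random(seed)
    k = max(1, round(len(holdout_ids) * fraction))
    return rng.sample(holdout_ids, k)

def precision_recall(predicted, actual):
    """predicted/actual: sets of document ids judged relevant.

    Precision: of the documents the model called relevant, how many were.
    Recall: of the truly relevant documents, how many the model found.
    """
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Example: a 1,000-document holdout set yields a 50-document random sample.
holdout = list(range(1, 1001))
sample = rigorous_sample(holdout, seed=42)
```

Because the sample is random rather than shaped by keyword searches or prior review decisions, statistics computed on it are more defensible as an estimate of how the model performs on the corpus as a whole.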
Helping Admins Communicate Outside Everlaw
We’ve also made it easier for admins to communicate outside of the review and production process, because we know they frequently need to coordinate their teams beyond the platform, too. Review administrators might need to contact their team about payroll details, a schedule change, or parking at the review site. For those moments, case administrators can now download a user and/or project list as a CSV that includes each reviewer’s email address and associated projects, making it easy to contact review staff, in bulk or individually, about workplace or operational issues not directly tied to the progression of review on any given case.
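The export is a plain CSV, so it drops straight into a mail merge or spreadsheet. The sketch below shows what producing such a file might look like; the column layout and sample data are hypothetical, not Everlaw's actual export schema.

```python
import csv
import io

# Hypothetical user records; the fields are illustrative only.
users = [
    {"name": "A. Reviewer", "email": "a.reviewer@example.com",
     "projects": ["Acme v. Beta", "In re Gamma"]},
    {"name": "B. Admin", "email": "b.admin@example.com",
     "projects": ["Acme v. Beta"]},
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Name", "Email", "Projects"])
for user in users:
    # Join a user's projects into one cell so each row stays one user.
    writer.writerow([user["name"], user["email"], "; ".join(user["projects"])])
csv_text = buf.getvalue()
```

From there, filtering by project or pulling the email column into a bulk message is a one-liner in any spreadsheet tool.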
We release new features every month, so make sure you subscribe to our blog to stay updated on our latest feature developments. If you’re curious about the full list of new features we’ve released, read our support articles on Release 35.0 and 36.0.