
The Birth of a New Feature: Document Reviewer QA

by Everlaw

We release a lot of new features, making major additions to Everlaw nearly every month. As a result, we get asked how we come up with so many new ideas, especially in a field in which some claim all the software is the same. Well, we can’t take all the credit for the innovation: we base new features on the needs of our users. Here’s an example.

We were recently interviewing a case lead about an existing feature when she mentioned an interesting way she was using the platform—to make HR decisions. Specifically, she was looking at hundreds of documents to evaluate how good her reviewers were. Armed with this information, she could then decide whether to engage them on future cases and whether to provide additional training or direction.

To do this, the case lead would manually compare a reviewer’s original rating to how important a document turned out to be, thus gauging the accuracy of each employee. Imagine doing this for dozens of reviewers and hundreds of documents, gathering enough data to pick the strongest possible team. In an environment of continued pressure to minimize wasted spending, the prudence of this case lead’s approach becomes clear.
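To make the mechanics concrete, here is a minimal sketch of that comparison in Python. The data model and field names are hypothetical, purely for illustration, and not how Everlaw implements the feature: each record simply pairs a reviewer's original coding of a document with how that document ultimately turned out.

```python
# A minimal sketch of the comparison described above, using a hypothetical
# data model (not Everlaw's actual schema): each record pairs a reviewer's
# original coding of a document with its final determination.
from collections import defaultdict

def reviewer_error_rates(review_records):
    """Return each reviewer's error rate: the share of reviewed documents
    whose original coding disagreed with the final determination."""
    totals = defaultdict(int)  # documents reviewed, per reviewer
    errors = defaultdict(int)  # documents miscoded, per reviewer
    for record in review_records:
        reviewer = record["reviewer"]
        totals[reviewer] += 1
        if record["original_coding"] != record["final_determination"]:
            errors[reviewer] += 1
    return {r: errors[r] / totals[r] for r in totals}

# Example: a reviewer who miscoded 3 of 10 documents has a 30% error rate.
records = (
    [{"reviewer": "A", "original_coding": "hot", "final_determination": "hot"}] * 7
    + [{"reviewer": "A", "original_coding": "cold", "final_determination": "hot"}] * 3
)
print(reviewer_error_rates(records))  # {'A': 0.3}
```

Doing this by hand, across dozens of reviewers and hundreds of documents, is exactly the tedious work the feature automates.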

To help avoid double work and delays, this analysis must be not only thorough but also timely. On an IP case, for instance, subsequent ‘reviewer QA’ revealed that the reviewers did not understand the technical elements involved, but only after they had incorrectly coded thousands of documents!

These stories motivated us to engineer a way to automate this work. We wanted to remove the manual and time-intensive nature of this QA so it could catch problems sooner, saving firms and their clients time and money.

So, how’d we do? When we were testing this feature, we found an anomaly: a reviewer with a 30% error rate. We checked the calculations and looked for bugs that might explain such a high error rate. Then, we talked to the case lead. He confirmed that the number was correct, and that much of the reviewer’s work had had to be redone. An early catch like this helps prevent wasted effort, saving firms time, money, and further complications.

We’re eagerly collecting more feedback on this new feature to make it as useful as possible. Right now, Dan, the feature’s engineer, is working to make it possible to export these statistics and share them among case leads. As always, we would love to hear from you. Let us know if you’d like to see how this feature works or if you have a suggestion for improving it!