
How to Measure Ediscovery Progress

by Everlaw

Many of us are familiar with John Wanamaker’s lament, “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.” Now the legal industry is asking the same question of ediscovery, as demand for case and firm efficiency intensifies.

There’s no shortage of advice on how to trim the fat in ediscovery and boost efficiency. While there’s plenty of upside to testing these tactics, the most effective optimizations are ones customized to your firm’s needs. As in advertising, the first step to achieving this is measurement.

If your goal is improving efficiency in your team’s litigation workflow, here are some metrics you may want to track – along with how to use them:

1) Ratio of Documents Viewed to Documents Coded

Comparing the case documents that have been viewed to the ones that have been coded can help you spot repeat work. For example, if a high percentage of documents have been viewed, but not yet coded or rated, you’ll want to identify what is stopping first pass reviewers from classifying them. This metric can therefore diagnose reviewer training gaps, poor document assignments, or ineffective searches.
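
As a rough illustration, here is a minimal sketch of that ratio, assuming per-document status records with hypothetical “viewed” and “coded” fields; in practice these would come from whatever reporting or export your review platform provides.

```python
# Minimal sketch: spot a view-without-code backlog from per-document status records.
# The "viewed" and "coded" fields are hypothetical, not any platform's real schema.

def viewed_to_coded_ratio(documents: list[dict]) -> float:
    """Return the ratio of documents viewed to documents coded."""
    viewed = sum(1 for d in documents if d.get("viewed"))
    coded = sum(1 for d in documents if d.get("coded"))
    return viewed / coded if coded else float("inf")

docs = [
    {"id": 1, "viewed": True, "coded": True},
    {"id": 2, "viewed": True, "coded": False},
    {"id": 3, "viewed": True, "coded": False},
]
print(f"Viewed-to-coded ratio: {viewed_to_coded_ratio(docs):.1f}")  # 3.0 suggests a bottleneck
```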

2) Documents Reviewed over Time

Once review has begun, quantifying progress can help with case planning and improvements. Looking at the rate of documents reviewed over time provides an estimate of the time needed to finish review. If that anticipated schedule runs past the available time, adjustments can be made to meet the deadline. The earlier in the process you know this, the more options you have for course-correction.
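
A back-of-the-envelope projection might look like the sketch below. The counts and dates are made up for illustration, and a real estimate would also account for weekends, team changes, and rolling productions.

```python
# Minimal sketch: project a review finish date from the pace so far.
# All counts and dates are hypothetical.
from datetime import date, timedelta

def projected_finish(reviewed: int, total: int, start: date, today: date) -> date:
    """Estimate the finish date, assuming the average daily rate so far holds."""
    days_elapsed = max((today - start).days, 1)
    daily_rate = reviewed / days_elapsed
    days_remaining = (total - reviewed) / daily_rate
    return today + timedelta(days=round(days_remaining))

finish = projected_finish(reviewed=12_000, total=60_000,
                          start=date(2024, 3, 1), today=date(2024, 3, 15))
print(f"Projected completion: {finish}")  # compare against the production deadline
```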

For a more granular view of progress, you can look at specific projects or workflows. For example, you can evaluate how second pass review is moving along by looking at the percentage of that assignment that has been completed.

3) Reviewer Efficiency

To diagnose performance, you can look at specific team members’ review rates. Seeing how many documents or pages each reviewer codes per hour lets you compare efficiency across the team. This can help you adjust review assignments and even future team composition.
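
For example, given a hypothetical log of documents coded and hours worked per reviewer, a per-reviewer rate could be computed with a sketch like this:

```python
# Minimal sketch: documents coded per hour, by reviewer.
# The log of (documents coded, hours logged) per reviewer is hypothetical.

def docs_per_hour(review_log: dict[str, tuple[int, float]]) -> dict[str, float]:
    """Map each reviewer to their documents-per-hour rate."""
    return {name: coded / hours
            for name, (coded, hours) in review_log.items() if hours > 0}

log = {"Reviewer A": (420, 8.0), "Reviewer B": (250, 8.0), "Reviewer C": (610, 7.5)}
for name, rate in sorted(docs_per_hour(log).items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {rate:.0f} docs/hour")
```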

4) Review Accuracy

Reviewer speed is not the only thing that matters, though. Pair it with a measure of reviewer accuracy for a fuller picture of effectiveness. By comparing your own document ratings with those of your reviewers, you can spot individuals who did not understand the training or who are racing through documents without moving the case forward.
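
One simple way to quantify this is an agreement rate against a QC sample you rated yourself. The sketch below uses hypothetical document IDs and ratings; a fuller analysis might also break disagreements down by rating category.

```python
# Minimal sketch: agreement rate between a reviewer's ratings and a QC sample
# you rated yourself. Document IDs and ratings are hypothetical.

def agreement_rate(reference: dict[int, str], reviewer: dict[int, str]) -> float:
    """Fraction of overlapping documents where the reviewer's rating matches yours."""
    shared = reference.keys() & reviewer.keys()
    if not shared:
        return 0.0
    matches = sum(1 for doc_id in shared if reference[doc_id] == reviewer[doc_id])
    return matches / len(shared)

qc_sample = {101: "Hot", 102: "Cold", 103: "Warm", 104: "Cold"}
reviewer_b = {101: "Hot", 102: "Warm", 103: "Warm", 104: "Cold"}
print(f"Agreement with QC sample: {agreement_rate(qc_sample, reviewer_b):.0%}")  # 75%
```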

5) Rating Consistency

Another way to evaluate the accuracy of review is to look at rating trends by reviewer. For instance, first pass reviewers who rate more than a fifth of documents “Hot,” or who rate almost all documents “Warm,” may need additional training. You can also compare how two reviewers with similar assignments rated their documents: consistency between them can be a useful proxy for accuracy.
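
A quick consistency check might tally each reviewer’s rating distribution and flag skewed ones, as in this sketch; the sample ratings are made up, and the one-fifth threshold is taken from the rule of thumb above.

```python
# Minimal sketch: flag a reviewer whose rating distribution looks skewed
# (e.g. more than a fifth of documents rated "Hot"). Data is hypothetical.
from collections import Counter

def rating_distribution(ratings: list[str]) -> dict[str, float]:
    """Return each rating's share of a reviewer's coded documents."""
    counts = Counter(ratings)
    total = sum(counts.values())
    return {rating: n / total for rating, n in counts.items()}

reviewer_ratings = ["Hot", "Hot", "Warm", "Hot", "Cold", "Hot", "Warm", "Hot"]
dist = rating_distribution(reviewer_ratings)
if dist.get("Hot", 0.0) > 0.20:
    print(f"Flag for follow-up: {dist['Hot']:.0%} of documents rated Hot")  # 62%
```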

When you’re looking to improve your team’s or firm’s efficiency, start by looking at the available data – on documents, review, and reviewers. These metrics can enable you to spot the “waste” in your process, improving ROI.