What is Predictive Coding and How to Do it Right (Part 2)

In part one of this series, I looked at how predictive coding differs from manual review and from search, and why it is an increasingly essential part of ediscovery.  If you’re not convinced, let me know why in the comments.  If you are convinced, then you might wonder whether all predictive coding tools are the same.  The answer: not quite.

What’s the Difference?

As I mentioned, predictive coding tools use a small number of human-reviewed documents to predict classifications for the remaining non-reviewed documents.  They take data about what people have done and use it to simulate what they would do in subsequent review.  As you might expect, the better that original data, the better the predictions.
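To make that loop concrete, here’s a minimal sketch of the train-on-a-sample, predict-the-rest idea.  The bag-of-words features, scikit-learn classifier, and toy documents are all illustrative stand-ins of my own; real predictive coding platforms use far richer models and review workflows.

```python
# A minimal sketch of "train on a small reviewed sample, predict the rest."
# TF-IDF features and logistic regression are illustrative stand-ins only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small set of human-reviewed documents with their ratings.
reviewed_docs = [
    "Q3 merger negotiations and due diligence notes",
    "Lunch plans for Friday",
    "Draft purchase agreement attached for review",
    "Fantasy football league standings",
]
reviewed_labels = [1, 0, 1, 0]  # 1 = important, 0 = not important

# The much larger set nobody has looked at yet.
unreviewed_docs = [
    "Revised diligence checklist for the acquisition",
    "Office holiday party RSVP",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(
    vectorizer.fit_transform(reviewed_docs), reviewed_labels
)

# Score every unreviewed document with a probability of "important."
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, p in zip(unreviewed_docs, scores):
    print(f"{p:.2f}  {doc}")
```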

But what makes data bad?  It isn’t the occasional document rated incorrectly: those errors tend to wash out once you have enough data.  Rather, it’s the systematic decisions, made time after time, that are logical to humans but not to computers.  In other words, it’s how your team rates documents.

For example, what does it mean when one email in a thread is marked “important” by a reviewer, and another is marked “not important”?  It seems contradictory, since both emails are part of the same conversation.  However, the latter reviewer might simply have reasoned that the last email in the thread has the full context, so the remainder can be set aside as “not important.”  This isn’t a mistake; it’s how a team may choose to do its document review.

Some predictive coding tools ignore all of the ratings on an email thread like this, because they don’t know how to resolve the inconsistency.  Others include every rating and hunt for differences between the emails that don’t actually exist.  The Everlaw approach, however, interprets the intention of the user, treating the entire thread as “important” even though many of the emails within it may have been demoted.
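Here’s a toy illustration of the difference, assuming a simple “promote the thread to its strongest rating” rule.  This is my own sketch of the general technique, not Everlaw’s actual algorithm:

```python
# Toy comparison: discarding mixed-rating threads vs. resolving reviewer intent.
from collections import defaultdict

# Hypothetical reviewer ratings: (thread_id, email_id, rating)
ratings = [
    ("thread-1", "msg-1", "not important"),  # early email, context elsewhere
    ("thread-1", "msg-2", "not important"),
    ("thread-1", "msg-3", "important"),      # final email with full context
    ("thread-2", "msg-4", "not important"),
]

# Collect the set of distinct ratings seen in each thread.
by_thread = defaultdict(set)
for thread_id, _, rating in ratings:
    by_thread[thread_id].add(rating)

# Naive approach: drop any thread whose ratings conflict.
consistent = [r for r in ratings if len(by_thread[r[0]]) == 1]
print(len(consistent), "of", len(ratings), "ratings survive the naive filter")

# Intent-aware approach: promote every email to its thread's strongest
# rating, so no training data is discarded.
thread_label = {
    t: ("important" if "important" in s else "not important")
    for t, s in by_thread.items()
}
resolved = [(t, m, thread_label[t]) for t, m, _ in ratings]
for row in resolved:
    print(row)
```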

To return to the example from part one, it’s like asking your librarian for a recommendation based on your love of non-fiction history anthologies and your occasional liking of mystery novels.  If the librarian knows that you’re downplaying your love of mysteries to avoid embarrassment, her recommendations are going to be that much better.


Why Does It Matter?

The more accurately a predictive coding system can interpret human behavior, the more time- and cost-effective it is.  For instance, if the system ignores email threads marked as both “important” and “not important,” then there are fewer documents on which to base predictions. So, either:

1) the system requires more documents to be reviewed before it can make accurate predictions, increasing document review costs in both time and dollars (see the back-of-the-envelope sketch after this list); or

2) the predictions are less accurate, because they rest on an attempt to find differences between the earlier emails in a thread and the later “important” ones; the tool then does a worse job of identifying documents that are likely to be meaningful, requiring more hours on the part of reviewers.
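To put rough numbers on option 1, here’s a back-of-the-envelope calculation; the 30% figure is invented purely for illustration:

```python
# Back-of-the-envelope only; the 30% figure is invented for illustration.
reviewed = 1000      # documents the team has hand-rated
mixed_share = 0.30   # hypothetical share living in mixed-rating threads

usable = reviewed * (1 - mixed_share)
needed = reviewed / (1 - mixed_share)
print(f"Usable training documents after discarding: {usable:.0f}")  # 700
print(f"Reviews needed to net {reviewed} usable: {needed:.0f}")     # 1429
```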


The Takeaway

Ultimately, the utility of a predictive coding tool is dictated less by the technology and more by the thinking behind it.  Algorithms need to be not only effective and efficient, but also human-centric.  As you compare predictive coding tools, ask what they do to better interpret reviewer-provided information.  Do the developers truly understand how the tool might be used?  How does the platform work with the processes that legal teams already have in place, rather than trying to change those processes?

We’re proud of the work we do to understand users, and we’re constantly striving to do better.  Have tips for us?  We’d love to hear them!