Machine Learning Makes Its Way (Slowly) into e-Discovery

Many industries are being disrupted by machine learning, and the legal profession is no exception.

Years ago, when litigation required organizations to search their documents for relevant material, discovery was carried out by attorneys leafing through paper documents. As documents went digital, the process at first didn’t change much: attorneys looked at documents in electronic form the same way they had looked at paper documents. But the ever-expanding volume of documents, and the desire to limit the time spent on discovery, led attorneys to adopt keyword search technologies. Now a process called technology-assisted review (TAR) is applying machine learning to the review of electronically stored information, with the potential to save clients time and money by avoiding the review of documents that are not relevant.

In a nutshell, here is how TAR works, according to a draft guideline document produced at Duke University Law School: “A human reviewer reviews and codes documents as ‘relevant’ or ‘nonrelevant’ and feeds this information to the software, which takes that human input and uses it to draw inferences about unreviewed documents. The software categorizes each document in the collection as relevant or nonrelevant or ranks them in order of likely relevance.”
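To make that description concrete, here is a minimal sketch of the supervised-learning step in Python with scikit-learn. It is illustrative only, not any vendor’s implementation; the documents, the relevance codes, and the choice of a TF-IDF/logistic-regression model are all hypothetical placeholders.

```python
# Minimal sketch of the supervised-learning core of TAR (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded (1 = relevant, 0 = nonrelevant).
reviewed_docs = [
    "merger agreement between the parties dated march 2011",
    "quarterly cafeteria menu and parking notices",
    "email discussing the disputed merger terms",
    "holiday party planning thread",
]
codes = [1, 0, 1, 0]

# Documents no human has looked at yet.
unreviewed_docs = [
    "draft amendment to the merger agreement",
    "reminder to reset your network password",
]

# Learn from the human input ...
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(reviewed_docs), codes)

# ... then rank every unreviewed document by likely relevance,
# as the Duke draft guideline describes.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```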

Most of the largest law firms and many U.S. government agencies, including the Department of Justice, are deploying TAR or have recognized its value. Yet many attorneys and judges are still unfamiliar with TAR and unsure how to apply it.

Thomas Gricks is an e-discovery lawyer who was involved in one of the early landmark cases about using TAR in litigation. He is now managing director of professional services for Catalyst, a vendor that offers law firms and corporations a TAR solution. He points to two academic studies in 2011 that challenged the notion that exhaustive human review was the gold standard for performance. “Both determined that supervised machine learning techniques, where you are teaching an algorithm to help find the rest of the documents you would want, could do at least as well and sometimes better than human review,” he says.

Then two 2012 cases, Da Silva Moore and Global Aerospace, opened the door to the use of TAR in litigation. Gricks, who was involved in the Global Aerospace case, says those cases “made it clear that this is a technology you could consider and was valid in the context of responding to discovery.”

Some points of dispute

But case law about TAR is still limited, and how it should be applied is often debated. “TAR has tremendous potential to cut time and costs in litigation document review, but that potential has not been fully realized to date,” says David Cohen, chair of the Records & E-Discovery Group of the global law firm Reed Smith. “One reason for that is because of uncertainties about whether the costs and complexities of TAR and potential for disputes with opposing parties could end up eating up or exceeding any cost savings that the use of TAR might otherwise generate.”

Another source of dispute is the natural tension between requesting and producing parties. For instance, a small environmental organization suing a large corporation over pollution is going to be skeptical of how the company fine-tunes its algorithm to search for documents relevant to the issue. “Every issue has a requesting and production perspective and they may not be fully aligned, so there is a natural tendency to have tension,” Gricks says.

“While some courts and experts have opined that TAR should not be held to a higher standard than human review, experience suggests that it often is held to a higher standard,” Cohen says. “When parties propose to use TAR, its acceptance is often dependent on agreement with opposing counsel or intervention by courts with regard to the methodology, and it is often subject to validation and testing protocols that are rarely imposed on keyword and human review processes in the absence of TAR.”

The vendors that produce the machine learning software have continued to fine-tune their offerings. Gricks says the newest generation of tools is described as TAR 2.0. The real practical improvement over TAR 1.0, he adds, lies in the distinction between one-time training and continuous active learning.

In a TAR 1.0 system, you train the tool until the algorithm is no longer improving, a point usually called stabilization, he explains. Then you stop training and use the tool to split the document set into presumptively relevant and presumptively nonrelevant groups. TAR 2.0 instead uses “continuous active learning” throughout the review: every coding decision, down to the last document marked relevant, is fed back to train the algorithm.
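The distinction is easier to see in code. Below is a toy simulation of a continuous active learning loop, in the same sketch style as above: a ground-truth label list stands in for the human reviewer, and after every coding decision the model is retrained and the remaining documents are re-ranked. The corpus, labels, and seed choices are hypothetical, and this is not any vendor’s actual implementation.

```python
# Toy simulation of TAR 2.0 continuous active learning (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

corpus = [
    ("merger agreement draft", 1),
    ("email about merger negotiations", 1),
    ("cafeteria menu for march", 0),
    ("merger closing checklist", 1),
    ("parking garage notice", 0),
    ("reminder to reset your network password", 0),
]
texts = [t for t, _ in corpus]
truth = [y for _, y in corpus]  # stands in for the human reviewer's decisions

vectorizer = TfidfVectorizer().fit(texts)
X = vectorizer.transform(texts)

coded = {0: truth[0], 2: truth[2]}  # seed set: one relevant, one nonrelevant
while len(coded) < len(corpus):
    # Retrain on every coding decision made so far (continuous learning) ...
    idx = list(coded)
    model = LogisticRegression().fit(X[idx], [coded[i] for i in idx])
    # ... re-rank the uncoded documents, and send the top one to the reviewer.
    uncoded = [i for i in range(len(corpus)) if i not in coded]
    scores = model.predict_proba(X[uncoded])[:, 1]
    next_doc = uncoded[max(range(len(uncoded)), key=lambda j: scores[j])]
    coded[next_doc] = truth[next_doc]  # the "human" codes it; loop repeats
    print(f"reviewed: {texts[next_doc]!r} -> {coded[next_doc]}")
```

By contrast, a TAR 1.0 workflow would run the training step only until stabilization and then apply the resulting model once to divide the rest of the collection.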
