Da Silva Moore v. Publicis Groupe
287 F.R.D. 182 (S.D.N.Y. 2012)
February 24, 2012

Peck, Andrew J., United States Magistrate Judge

ESI Protocol
Technology Assisted Review
Initial Disclosures
Cooperation of counsel
Summary

In this landmark opinion, the court held—for the first time—that computer-assisted review is an acceptable way to search for ESI in certain cases. Computer-assisted review is appropriate and should be regarded as a useful tool for large-data-volume cases when such review promises to secure a “just, speedy, and inexpensive determination” under FRCP 1, by ensuring discovery’s burden does not outweigh its likely benefit under FRCP 26(b)(2)(C). Computer-assisted review was appropriate in this case because defendant’s predictive coding proposal was transparent, the parties had already largely agreed to it, and the proposal promised to be a better and less costly tool than the alternatives.

Additional Decisions
MONIQUE DA SILVA MOORE, et al., Plaintiffs,
v.
PUBLICIS GROUPE & MSL GROUP, Defendants
No. 11 Civ. 1279 (ALC) (AJP)
United States District Court, S.D. New York
February 24, 2012

Counsel

Janette Wipper, Esq., Deepika Bains, Esq., Siham Nurhussein, Esq., Sanford Wittels & Heisler, LLP, San Francisco, CA, for Plaintiffs and Class.
Brett M. Anders, Esq., Victoria Woodin Chavey, Esq., Jeffrey W. Brecher, Esq., Jackson Lewis LLP, Melville, NY, for Defendant MSL Group.
Peck, Andrew J., United States Magistrate Judge

OPINION AND ORDER

To my knowledge, no reported case (federal or state) has ruled on the use of computer-assisted coding. While anecdotally it appears that some lawyers are using predictive coding technology, it also appears that many lawyers (and their *183 clients) are waiting for a judicial decision approving of computer-assisted review.
Perhaps they are looking for an opinion concluding that: “It is the opinion of this court that the use of predictive coding is a proper and acceptable means of conducting searches under the Federal Rules of Civil Procedure, and furthermore that the software provided for this purpose by [insert name of your favorite vendor] is the software of choice in this court.” If so, it will be a long wait.
....
Until there is a judicial opinion approving (or even critiquing) the use of predictive coding, counsel will just have to rely on this article as a sign of judicial approval. In my opinion, computer-assisted coding should be used in those cases where it will help “secure the just, speedy, and inexpensive” (Fed.R.Civ.P. 1) determination of cases in our e-discovery world.
Andrew Peck, Search, Forward, L. Tech. News, Oct. 2011, at 25, 29. This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.[1]
In this action, five female named plaintiffs are suing defendant Publicis Groupe, “one of the world's ‘big four’ advertising conglomerates,” and its United States public relations subsidiary, defendant MSL Group. (See Dkt. No. 4: Am. Compl. ¶¶ 1, 5, 26–32.) Plaintiffs allege that defendants have a “glass ceiling” that limits women to entry level positions, and that there is “systemic, company-wide gender discrimination against female PR employees like Plaintiffs.” (Am.Compl. ¶¶ 4–6, 8.) Plaintiffs allege that the gender discrimination includes
(a) paying Plaintiffs and other female PR employees less than similarly-situated male employees; (b) failing to promote or advance Plaintiffs and other female PR employees at the same rate as similarly-situated male employees; and (c) carrying out discriminatory terminations, demotions and/or job reassignments of female PR employees when the company reorganized its PR practice beginning in 2008 ....
(Am.Compl. ¶ 8.)
Plaintiffs assert claims for gender discrimination under Title VII (and under similar New York State and New York City laws) (Am.Compl. ¶¶ 204–25), pregnancy discrimination under Title VII and related violations of the Family and Medical Leave Act (Am.Compl. ¶¶ 239–71), as well as violations of the Equal Pay Act and Fair Labor Standards Act (and the similar New York Labor Law) (Am.Compl. ¶¶ 226–38).
The complaint seeks to bring the Equal Pay Act/FLSA claims as a “collective action” (i.e., opt-in) on behalf of all “current, former, and future female PR employees” employed by defendants in the United States “at any time during the applicable liability period” (Am.Compl. ¶¶ 179–80, 190–203), and as a class action on the gender and pregnancy discrimination claims and on the New York Labor Law pay claim (Am.Compl. ¶¶ 171–98). Plaintiffs, however, have not yet moved for collective action or class certification.
Defendant MSL denies the allegations in the complaint and has asserted various affirmative defenses. (See generally Dkt. No. 19: MSL Answer.) Defendant Publicis is challenging the Court's jurisdiction over it, and the parties have until March 12, 2012 to conduct jurisdictional discovery. (See Dkt. No. 44: 10/12/11 Order.)
My Search, Forward article explained my understanding of computer-assisted review, as follows:
By computer-assisted coding, I mean tools (different vendors use different names) that use sophisticated algorithms to enable the computer to determine relevance, *184 based on interaction with (i.e., training by) a human reviewer.
Unlike manual review, where the review is done by the most junior staff, computer-assisted coding involves a senior partner (or [small] team) who review and code a “seed set” of documents. The computer identifies properties of those documents that it uses to code other documents. As the senior reviewer continues to code more sample documents, the computer predicts the reviewer's coding. (Or, the computer codes some documents and asks the senior reviewer for feedback.)
When the system's predictions and the reviewer's coding sufficiently coincide, the system has learned enough to make confident predictions for the remaining documents. Typically, the senior lawyer (or team) needs to review only a few thousand documents to train the computer.
Some systems produce a simple yes/no as to relevance, while others give a relevance score (say, on a 0 to 100 basis) that counsel can use to prioritize review. For example, a score above 50 may produce 97% of the relevant documents, but constitutes only 20% of the entire document set.
Counsel may decide, after sampling and quality control tests, that documents with a score of below 15 are so highly likely to be irrelevant that no further human review is necessary. Counsel can also decide the cost-benefit of manual review of the documents with scores of 15–50.
Andrew Peck, Search, Forward, L. Tech. News, Oct. 2011, at 25, 29.[2]
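By way of illustration, the score-based triage the article describes can be reduced to a few lines of code. The following Python sketch is purely illustrative: the cutoffs of 50 and 15 come from the quoted passage, but the scores, document identifiers, and function names are hypothetical and do not reflect any vendor's actual tool or API.

```python
# Minimal sketch of the score-based triage described above. All scores,
# cutoffs, and document IDs are hypothetical; real tools differ by vendor.

def triage_by_score(scored_docs, low_cutoff=15, high_cutoff=50):
    """Split documents into review tiers by a 0-100 relevance score."""
    likely_relevant, gray_zone, likely_irrelevant = [], [], []
    for doc_id, score in scored_docs:
        if score >= high_cutoff:
            likely_relevant.append(doc_id)    # may capture, e.g., 97% of relevant docs
        elif score >= low_cutoff:
            gray_zone.append(doc_id)          # cost-benefit call on further manual review
        else:
            likely_irrelevant.append(doc_id)  # skip only after sampling/QC confirms
    return likely_relevant, gray_zone, likely_irrelevant

docs = [("DOC-0001", 92), ("DOC-0002", 48), ("DOC-0003", 7), ("DOC-0004", 63)]
print(triage_by_score(docs))
# (['DOC-0001', 'DOC-0004'], ['DOC-0002'], ['DOC-0003'])
```

The point of the sketch is that counsel's judgment enters through the cutoffs, which the sampling and quality-control testing described in the quoted passage must validate.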
My article further explained my belief that Daubert would not apply to the results of using predictive coding, but that in any challenge to its use, this Judge would be interested in both the process used and the results:
[I]f the use of predictive coding is challenged in a case before me, I will want to know what was done and why that produced defensible results. I may be less interested in the science behind the “black box” of the vendor's software than in whether it produced responsive documents with reasonably high recall and high precision.
That may mean allowing the requesting party to see the documents that were used to train the computer-assisted coding system. (Counsel would not be required to explain why they coded documents as responsive or non-responsive, just what the coding was.) Proof of a valid “process,” including quality control testing, also will be important.
....
Of course, the best approach to the use of computer-assisted coding is to follow the Sedona Cooperation Proclamation model. Advise opposing counsel that you plan to use computer-assisted coding and seek agreement; if you cannot, consider whether to abandon predictive coding for that case or go to the court for advance approval.
After several discovery conferences and rulings by Judge Sullivan (the then-assigned District Judge), he referred the case to me for general pretrial supervision. (Dkt. No. 48: 11/28/11 Referral Order.) At my first discovery conference with the parties, both parties' counsel mentioned that they had been discussing an “electronic discovery protocol,” and MSL's counsel stated that an open issue was “plaintiffs reluctance to utilize predictive coding to try to cull down the” approximately three million electronic documents from the agreed-upon custodians. (Dkt. No. 51: 12/2/11 Conf. Tr. at 7–8.)[3] Plaintiffs' counsel clarified that MSL had “over simplified [plaintiffs'] stance on predictive coding,” i.e., that it was not opposed *185 but had “multiple concerns ... on the way in which [MSL] plan to employ predictive coding” and plaintiffs wanted “clarification.” (12/2/11 Conf. Tr. at 21.)
The Court did not rule but offered the parties the following advice:
Now, if you want any more advice, for better or for worse on the ESI plan and whether predictive coding should be used, ... I will say right now, what should not be a surprise, I wrote an article in the October Law Technology News called Search Forward, which says predictive coding should be used in the appropriate case.
Is this the appropriate case for it? You all talk about it some more. And if you can't figure it out, you are going to get back in front of me. Key words, certainly unless they are well done and tested, are not overly useful. Key words along with predictive coding and other methodology, can be very instructive.
I'm also saying to the defendants who may, from the comment before, have read my article. If you do predictive coding, you are going to have to give your seed set, including the seed documents marked as nonresponsive to the plaintiff's counsel so they can say, well, of course you are not getting any [relevant] documents, you're not appropriately training the computer.
(12/2/11 Conf. Tr. at 20–21.) The December 2, 2011 conference adjourned with the parties agreeing to further discuss the ESI protocol. (12/2/11 Conf. Tr. at 34–35.)
The ESI issue was next discussed at a conference on January 4, 2012. (Dkt. No. 71: 1/4/12 Conf. Tr.) Plaintiffs' ESI consultant conceded that plaintiffs “have not taken issue with the use of predictive coding or, frankly, with the confidence levels that they [MSL] have proposed....” (1/4/12 Conf. Tr. at 51.) Rather, plaintiffs took issue with MSL's proposal that, after the computer was fully trained and the results generated, MSL would review and produce only the top 40,000 documents, which it estimated would cost $200,000 (at $5 per document). (1/4/12 Conf. Tr. at 47–48, 51.) The Court rejected MSL's 40,000-document proposal as a “pig in a poke.” (1/4/12 Conf. Tr. at 51–52.) The Court explained that “where [the] line will be drawn [as to review and production] is going to depend on what the statistics show for the results,” since “[p]roportionality requires consideration of results as well as costs. And if stopping at 40,000 is going to leave a tremendous number of likely highly responsive documents unproduced, [MSL's proposed cutoff] doesn't work.” (1/4/12 Conf. Tr. at 51–52; see also id. at 57–58; Dkt. No. 88: 2/8/12 Conf. Tr. at 84.) The parties agreed to further discuss and finalize the ESI protocol by late January 2012, with a conference held on February 8, 2012. (1/4/12 Conf. Tr. at 60–66; see 2/8/12 Conf. Tr.)
The first issue regarding the ESI protocol involved the selection of which custodians' emails would be searched. MSL agreed to thirty custodians for a “first phase.” (Dkt. No. 88: 2/8/12 Conf. Tr. at 23–24.) MSL's custodian list included the president and other members of MSL's “executive team,” most of its HR staff and a number of managing directors. (2/8/12 Conf. Tr. at 24.)
Plaintiffs sought to include as additional custodians seven male “comparators,” explaining that the comparators' emails were needed in order to find information about their job duties and how their duties compared to plaintiffs' job duties. (2/8/12 Conf. Tr. at 25–27.) Plaintiffs gave an example of the men being given greater “client contact” or having better job assignments. (2/8/12 Conf. Tr. at 28–30.) The Court held that the search of the comparators' emails would be so different from that of the other custodians that the comparators should not be included in the emails subjected to predictive coding review. (2/8/12 Conf. Tr. at 28, 30.) As a fallback position, plaintiffs proposed to “treat the comparators as a separate search,” but the Court found that plaintiffs could not describe in any meaningful way how they would search the comparators' emails, even as a separate search. (2/8/12 Conf. Tr. at 30–31.) Since the plaintiffs likely could develop the information needed through depositions of the comparators, the Court ruled that the comparators' emails would not be included in phase one. (2/8/12 Conf. Tr. at 31.)
*186 Plaintiffs also sought to include MSL's CEO, Olivier Fleuriot, who is located in France and whose emails were mostly written in French. (2/8/12 Conf. Tr. at 32–34.) The Court concluded that because his emails with the New York-based executive staff would be gathered from those custodians, and because Fleuriot's emails stored in France likely would be covered by the French privacy and blocking laws,[4] Fleuriot should not be included as a first-phase custodian. (2/8/12 Conf. Tr. at 35.)
Plaintiffs sought to include certain managing directors from MSL offices at which no named plaintiff worked. (2/8/12 Conf. Tr. at 36–37.) The Court ruled that since plaintiffs had not yet moved for collective action status or class certification, until the motions were made and granted, discovery would be limited to offices (and managing directors) where the named plaintiffs had worked. (2/8/12 Conf. Tr. at 37–39.)
The final issue raised by plaintiffs related to the phasing of custodians and the discovery cutoff dates. MSL proposed finishing phase-one discovery completely before considering what to do about a second phase. (See 2/8/12 Conf. Tr. at 36.) Plaintiffs expressed concern that there would not be time for two separate phases, essentially seeking to move the phase-two custodians back into phase one. (2/8/12 Conf. Tr. at 35–36.) The Court found MSL's separate phase approach to be more sensible and noted that if necessary, the Court would extend the discovery cutoff to allow the parties to pursue discovery in phases. (2/8/12 Conf. Tr. at 36, 50.)
The parties agreed on certain ESI sources, including the “EMC SourceOne [Email] Archive,” the “PeopleSoft” human resources information management system and certain other sources including certain HR “shared” folders. (See Dkt. No. 88: 2/8/12 Conf. Tr. at 44–45, 50–51.) As to other “shared” folders, neither side was able to explain whether the folders merely contained forms and templates or collaborative working documents; the Court therefore left those shared folders for phase two unless the parties promptly provided information about likely contents. (2/8/12 Conf. Tr. at 47–48.)
The Court noted that because the named plaintiffs worked for MSL, plaintiffs should have some idea what additional ESI sources, if any, likely had relevant information; since the Court needed to consider proportionality pursuant to Rule 26(b)(2)(C), plaintiffs needed to provide more information to the Court than they were doing if they wanted to add additional data sources into phase one. (2/8/12 Conf. Tr. at 49–50.) The Court also noted that where plaintiffs were getting factual information from one source (e.g., pay information, promotions, etc.), “there has to be a limit to redundancy” to comply with Rule 26(b)(2)(C). (2/8/12 Conf. Tr. at 54.)[5]
The parties agreed to use a 95% confidence level (plus or minus two percent) to create a random sample of the entire email collection; that sample of 2,399 documents will be reviewed to determine relevant (and not relevant) documents for a “seed set” to use to train the predictive coding software. (Dkt. No. 88: 2/8/12 Conf. Tr. at 59–61.) An area of disagreement was that MSL reviewed the 2,399 documents before the parties agreed to add two additional concept groups (i.e., issue tags). (2/8/12 Conf. Tr. at 62.) MSL suggested that since it had agreed to provide all 2,399 documents (and MSL's coding of them) to plaintiffs for their review, plaintiffs can code them for the new issue tags, and MSL will incorporate that coding into the system. (2/8/12 Conf. Tr. at 64.) Plaintiffs' vendor agreed to that approach. (2/8/12 Conf. Tr. at 64.)
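The 2,399-document figure is consistent with the standard sample-size calculation for a 95% confidence level and a plus-or-minus 2% margin of error. The following worked Python sketch assumes the usual normal-approximation formula with the worst-case variance assumption (p = 0.5) and a finite-population correction for a collection of roughly three million emails; the opinion does not state which formula the parties' vendors actually used.

```python
def sample_size(population, z=1.96, margin=0.02, p=0.5):
    """Sample size via the normal approximation, with a finite-population
    correction; p = 0.5 is the worst-case (largest) variance assumption."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)     # ~2,401 for an infinite population
    return round(n0 / (1 + (n0 - 1) / population))  # slightly smaller for a finite one

print(sample_size(3_000_000))  # -> 2399, matching the sample size used here
```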
To further create the seed set to train the predictive coding software, MSL coded certain *187 documents through “judgmental sampling.” (2/8/12 Conf. Tr. at 64.) The remainder of the seed set was created by MSL reviewing “keyword” searches with Boolean connectors (such as “training and Da Silva Moore,” or “promotion and Da Silva Moore”) and coding the top fifty hits from those searches. (2/8/12 Conf. Tr. at 64–66, 72.) MSL agreed to provide all those documents (except privileged ones) to plaintiffs for plaintiffs to review MSL's relevance coding. (2/8/12 Conf. Tr. at 66.) In addition, plaintiffs provided MSL with certain other keywords, and MSL used the same process with plaintiffs' keywords as with the MSL keywords, reviewing and coding an additional 4,000 documents. (2/8/12 Conf. Tr. at 68–69, 71.) All of this review to create the seed set was done by senior attorneys (not paralegals, staff attorneys or junior associates). (2/8/12 Conf. Tr. at 92–93.) MSL reconfirmed that “[a]ll of the documents that are reviewed as a function of the seed set, whether [they] are ultimately coded relevant or irrelevant, aside from privilege, will be turned over to” plaintiffs. (2/8/12 Conf. Tr. at 73.)
The next area of discussion was the iterative rounds to stabilize the training of the software. MSL's vendor's predictive coding software ranks documents on a score of 100 to zero, i.e., from most likely relevant to least likely relevant. (2/8/12 Conf. Tr. at 70.) MSL proposed using seven iterative rounds; in each round it would review at least 500 documents from different concept clusters to see if the computer is returning new relevant documents. (2/8/12 Conf. Tr. at 73–74.) After the seventh round, to determine if the computer is well trained and stable, MSL would review a random sample (of 2,399 documents) from the discards (i.e., documents coded as non-relevant) to make sure the set the software determined not to be relevant does not, in fact, contain highly relevant documents. (2/8/12 Conf. Tr. at 74–75.) For each of the seven rounds and the final quality-check random sample, MSL agreed that it would show plaintiffs all the documents it looked at, including those deemed not relevant (except for privileged documents). (2/8/12 Conf. Tr. at 76.)
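The final quality-control step described above, reviewing a random sample drawn from the machine-coded discards to see whether relevant documents slipped through, is sometimes called an “elusion” sample in the technology-assisted-review literature. A minimal Python sketch follows; the document identifiers are hypothetical, and the human review of the sample is, of course, the step no code can supply.

```python
import random

def discard_sample(discards, n=2399, seed=2012):
    """Draw a random sample from documents the software coded not-relevant;
    senior attorneys then review it for relevant (especially 'hot') documents."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(discards, min(n, len(discards)))

# Hypothetical discard pile; in practice it would come from the trained tool.
discards = [f"DOC-{i:07d}" for i in range(500_000)]
for doc_id in discard_sample(discards):
    ...  # human review; relevant hits here may warrant further training rounds
```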
Plaintiffs' vendor noted that “we don't at this point agree that this is going to work. This is new technology and it has to be proven out.” (2/8/12 Conf. Tr. at 75.) Plaintiffs' vendor agreed, in general, that computer-assisted review works, and works better than most alternatives. (2/8/12 Conf. Tr. at 76.) Indeed, plaintiffs' vendor noted that “it is fair to say [that] we are big proponents of it.” (2/8/12 Conf. Tr. at 76.) The Court reminded the parties that computer-assisted review “works better than most of the alternatives, if not all of the [present] alternatives. So the idea is not to make this perfect, it's not going to be perfect. The idea is to make it significantly better than the alternatives without nearly as much cost.” (2/8/12 Conf. Tr. at 76.)
The Court accepted MSL's proposal for the seven iterative reviews, but with the following caveat:
But if you get to the seventh round and [plaintiffs] are saying that the computer is still doing weird things, it's not stabilized, etc., we need to do another round or two, either you will agree to that or you will both come in with the appropriate QC information and everything else and [may be ordered to] do another round or two or five or 500 or whatever it takes to stabilize the system.
(2/8/12 Conf. Tr. at 76–77; see also id. at 83–84, 88.)
On February 17, 2012, the parties submitted their “final” ESI Protocol which the Court “so ordered.” (Dkt. No. 92: 2/17/12 ESI Protocol & Order.)[6] Because this is the first Opinion dealing with predictive coding, the Court annexes hereto as an Exhibit the provisions of the ESI Protocol dealing with the predictive coding search methodology.
On February 22, 2012, plaintiffs filed objections to the Court's February 8, 2012 rulings. (Dkt. No. 93: Pls. Rule 72(a) Objections; see also Dkt. No. 94: Nurhussein Aff.; Dkt. No. 95: Neale Aff.) While those objections are before District Judge Carter, a few comments are in order.
Plaintiffs' objections to my February 8, 2012 rulings assert that my acceptance of MSL's predictive coding approach “provides unlawful ‘cover’ for MSL's counsel, who has a duty under FRCP 26(g) to ‘certify’ that their client's document production is ‘complete’ and ‘correct’ as of the time it was made. FRCP 26(g)(1)(A).” (Dkt. No. 93: Pls. Rule 72(a) Objections at 8 n. 7; accord, id. at 2.) In large-data cases like this, involving over three million emails, no lawyer using any search method could honestly certify that its production is “complete”—but more importantly, Rule 26(g)(1) does not require that. Plaintiffs simply misread Rule 26(g)(1). The certification required by Rule 26(g)(1) applies “with respect to a disclosure.” Fed.R.Civ.P. 26(g)(1)(A) (emphasis added). That is a term of art, referring to the mandatory initial disclosures required by Rule 26(a)(1). Since the Rule 26(a)(1) disclosure is information (witnesses, exhibits) that “the disclosing party may use to support its claims or defenses,” and failure to provide such information leads to virtually automatic preclusion, see Fed. R. Civ. P. 37(c)(1), it is appropriate for the Rule 26(g)(1)(A) certification to require disclosures be “complete and correct.”
Rule 26(g)(1)(B) is the provision that applies to discovery responses. It does not call for certification that the discovery response is “complete,” but rather incorporates the Rule 26(b)(2)(C) proportionality principle. Thus, Rule 26(g)(1)(A) has absolutely nothing to do with MSL's obligations to respond to plaintiffs' discovery requests. Plaintiffs' argument is based on a misunderstanding of Rule 26(g)(1).[7]
Plaintiffs' objections also argue that my acceptance of MSL's predictive coding protocol “is contrary to Federal Rule of Evidence 702” and “violates the gatekeeping function underlying Rule 702.” (Dkt. No. 93: Pls. Rule 72(a) Objections at 2–3; accord, id. at 10–12.)[8]
Federal Rule of Evidence 702 and the Supreme Court's Daubert decision[9] deal with the trial court's role as gatekeeper to exclude unreliable expert testimony from being submitted *189 to the jury at trial. See also Advisory Comm. Notes to Fed.R.Evid. 702. It is a rule for admissibility of evidence at trial.
If MSL sought to have its expert testify at trial and introduce the results of its ESI protocol into evidence, Daubert and Rule 702 would apply. Here, in contrast, the tens of thousands of emails that will be produced in discovery are not being offered into evidence at trial as the result of a scientific process or otherwise. The admissibility of specific emails at trial will depend upon each email itself (for example, whether it is hearsay, or a business record or party admission), not how it was found during discovery.
Rule 702 and Daubert simply are not applicable to how documents are searched for and found in discovery.
Finally, plaintiffs' objections assert that “MSL's method lacks the necessary standards for assessing whether its results are accurate; in other words, there is no way to be certain if MSL's method is reliable.” (Dkt. No. 93: Pls. Rule 72(a) Objections at 13–18.) Plaintiffs' concerns may be appropriate for resolution during or after the process (which the Court will be closely supervising), but are premature now. For example, plaintiffs complain that “MSL's method fails to include an agreed-upon standard of relevance that is transparent and accessible to all parties .... Without this standard, there is a high-likelihood of delay as the parties resolve disputes with regard to individual documents on a case-by-case basis.” (Id. at 14.) Relevance is determined by plaintiffs' document demands. As statistics show, perhaps only 5% of the disagreement among reviewers comes from close questions of relevance, as opposed to reviewer error. (See note 11 below.) The issue regarding relevance standards might be significant if MSL's proposal were not totally transparent. Here, however, plaintiffs will see how MSL has coded every email used in the seed set (both relevant and not relevant), and the Court is available to quickly resolve any issues.
Plaintiffs complain they cannot determine if “MSL's method actually works” because MSL does not describe how many relevant documents are permitted to be located in the final random sample of documents the software deemed irrelevant. (Pls. Rule 72(a) Objections at 15–16.) Plaintiffs argue that “without any decision about this made in advance, the Court is simply kicking the can down the road.” (Id. at 16.) In order to determine proportionality, it is necessary to have more information than the parties (or the Court) now have, including how many relevant documents will be produced and at what cost to MSL. Will the case remain limited to the named plaintiffs, or will plaintiffs seek and obtain collective action and/or class action certification? In the final sample of documents deemed irrelevant, are any relevant documents found that are “hot,” “smoking gun” documents (i.e., highly relevant)? Or are the only relevant documents more of the same thing? One hot document may require the software to be re-trained (or some other search method employed), while several documents that really do not add anything to the case might not matter. These types of questions are better decided “down the road,” when real information is available to the parties and the Court.
The decision to allow computer-assisted review in this case was relatively easy—the parties agreed to its use (although they disagreed about how best to implement such review). The Court recognizes that computer-assisted review is not a magic, Staples-Easy-Button solution appropriate for all cases. The technology exists and should be used where appropriate, but it is not a case of machine replacing humans: it is the process used and the interaction of man and machine that the courts need to examine.
The objective of review in ediscovery is to identify as many relevant documents as possible, while reviewing as few non-relevant documents as possible. Recall is the fraction of relevant documents identified during a review; precision is the fraction of identified documents that are relevant. Thus, recall is a measure of completeness, while precision is *190 a measure of accuracy or correctness. The goal is for the review method to result in higher recall and higher precision than another review method, at a cost proportionate to the “value” of the case. See, e.g., Maura R. Grossman & Gordon V. Cormack, Technology–Assisted Review in E–Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, Rich. J.L. & Tech., Spring 2011, at 8–9, available at http://jolt.richmond.edu/v17i3/article11.pdf.
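A worked example makes the two measures concrete. The counts in the Python sketch below are invented solely to show the arithmetic; they are not drawn from this case or from the cited study.

```python
def recall_precision(retrieved, relevant):
    """Recall = fraction of the relevant documents that were found;
    precision = fraction of the found documents that are relevant."""
    true_positives = len(retrieved & relevant)
    return true_positives / len(relevant), true_positives / len(retrieved)

# Hypothetical collection: 1,000 truly relevant documents; the review flags
# 1,200 documents as relevant, 800 of them correctly.
relevant = set(range(1_000))
retrieved = set(range(800)) | set(range(1_000, 1_400))
recall, precision = recall_precision(retrieved, relevant)
print(f"recall = {recall:.0%}, precision = {precision:.0%}")
# recall = 80%, precision = 67%
```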
The slightly more difficult case would be where the producing party wants to use computer-assisted review and the requesting party objects.[10] The question to ask in that situation is what methodology the requesting party would suggest instead. Linear manual review is simply too expensive where, as here, there are over three million emails to review. Moreover, while some lawyers still consider manual review to be the “gold standard,” that is a myth, as statistics clearly show that computerized searches are at least as accurate as, if not more accurate than, manual review. Herb Roitblat, Anne Kershaw, and Patrick Oot of the Electronic Discovery Institute conducted an empirical assessment to “answer the question of whether there was a benefit to engaging in a traditional human review or whether computer systems could be relied on to produce comparable results,” and concluded that “[o]n every measure, the performance of the two computer systems was at least as accurate (measured against the original review) as that of human re-review.” Herbert L. Roitblat, Anne Kershaw & Patrick Oot, Document Categorization in Legal Electronic Discovery: Computer Classification v. Manual Review, 61 J. Am. Soc'y for Info. Sci. & Tech. 70, 79 (2010).[11]
Likewise, Wachtell, Lipton, Rosen & Katz litigation counsel Maura Grossman and University of Waterloo professor Gordon Cormack studied data from the Text Retrieval Conference Legal Track (TREC) and concluded that: “[T]he myth that exhaustive manual review is the most effective—and therefore the most defensible—approach to document review is strongly refuted. Technology-assisted review can (and does) yield more accurate results than exhaustive manual review, with much lower effort.” Maura R. Grossman & Gordon V. Cormack, Technology–Assisted Review in E–Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, Rich. J.L. & Tech., Spring 2011, at 48.[12] The technology-assisted reviews in the Grossman–Cormack article also demonstrated significant cost savings over manual review: “The technology-assisted reviews require, on average, human review of only 1.9% of the documents, a fifty-fold savings over exhaustive manual review.” Id. at 43.
Because of the volume of ESI, lawyers frequently have turned to keyword searches to cull email (or other ESI) down to a more manageable volume for further manual review. Keywords have a place in production of ESI—indeed, the parties here used keyword searches (with Boolean connectors) to find documents for the expanded seed set to train the predictive coding software. In too *191 many cases, however, the way lawyers choose keywords is the equivalent of the child's game of “Go Fish.”[13] The requesting party guesses which keywords might produce evidence to support its case without having much, if any, knowledge of the responding party's “cards” (i.e., the terminology used by the responding party's custodians). Indeed, the responding party's counsel often does not know what is in its own client's “cards.”
Another problem with keywords is that they often are over-inclusive, that is, they find responsive documents but also large numbers of irrelevant documents. In this case, for example, a keyword search for “training” resulted in 165,208 hits; Da Silva Moore's name resulted in 201,179 hits; “bonus” resulted in 40,756 hits; “compensation” resulted in 55,602 hits; and “diversity” resulted in 38,315 hits. (Dkt. No. 92: 2/17/12 ESI Protocol Ex. A.) If MSL had to manually review all of the keyword hits, many of which would not be relevant (i.e., would be false positives), it would be quite costly.
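Some rough arithmetic shows why. Combining the hit counts quoted above with the $5-per-document review estimate MSL gave earlier in the case yields the following back-of-the-envelope figure; because the keyword result sets surely overlap, this Python sketch is an upper bound rather than a number from the record.

```python
# Upper-bound cost sketch from the hit counts quoted above and MSL's earlier
# $5-per-document review estimate; overlap between terms is ignored.
hits = {
    "training": 165_208,
    "Da Silva Moore": 201_179,
    "bonus": 40_756,
    "compensation": 55_602,
    "diversity": 38_315,
}
total_hits = sum(hits.values())
print(f"{total_hits:,} hits -> ${total_hits * 5:,} to review manually")
# 501,060 hits -> $2,505,300 to review manually
```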
Moreover, keyword searches usually are not very effective. In 1985, scholars David Blair and M. Maron collected 40,000 documents from a Bay Area Rapid Transit accident, and instructed experienced attorney and paralegal searchers to use keywords and other review techniques to retrieve at least 75% of the documents relevant to 51 document requests. David C. Blair & M.E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Comm. ACM 289 (1985). The searchers believed they had met that goal, but their average recall was just 20%. Id. This result has been replicated in the TREC Legal Track studies over the past few years.
Judicial decisions have criticized specific keyword searches. Important early decisions in this area came from two of the leading judicial scholars in ediscovery, Magistrate Judges John Facciola (District of Columbia) and Paul Grimm (Maryland). See United States v. O'Keefe, 537 F.Supp.2d 14, 24 (D.D.C.2008) (Facciola, M.J.); Equity Analytics, LLC v. Lundin, 248 F.R.D. 331, 333 (D.D.C.2008) (Facciola, M.J.); Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 260, 262 (D.Md.2008) (Grimm, M.J.). I followed their lead with William A. Gross Construction Associates, Inc., when I wrote:
This Opinion should serve as a wake-up call to the Bar in this District about the need for careful thought, quality control, testing, and cooperation with opposing counsel in designing search terms or “keywords” to be used to produce emails or other electronically stored information (“ESI”).
....
Electronic discovery requires cooperation between opposing counsel and transparency in all aspects of preservation and production of ESI. Moreover, where counsel are using keyword searches for retrieval of ESI, they at a minimum must carefully craft the appropriate keywords, with input from the ESI's custodians as to the words and abbreviations they use, and the proposed methodology must be quality control tested to assure accuracy in retrieval and elimination of “false positives.” It is time that the Bar—even those lawyers who did not come of age in the computer era—understand this.
William A. Gross Constr. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. 134, 134, 136 (S.D.N.Y.2009) (Peck, M.J.).
Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases. While this Court recognizes that computer-assisted review is not perfect, the Federal Rules of Civil Procedure do not require perfection. See, e.g., Pension Comm. of Univ. of Montreal Pension Plan v. Banc of Am. Sec., 685 F.Supp.2d 456, 461 (S.D.N.Y.2010). Courts and litigants must be cognizant of the aim of Rule 1, to “secure the just, speedy, and inexpensive determination” of lawsuits. Fed.R.Civ.P. 1. That goal is further reinforced by the proportionality doctrine set forth in Rule 26(b)(2)(C), which provides that:
On motion or on its own, the court must limit the frequency or extent of discovery *192 otherwise allowed by these rules or by local rule if it determines that:
(i) the discovery sought is unreasonably cumulative or duplicative, or can be obtained from some other source that is more convenient, less burdensome, or less expensive;
(ii) the party seeking discovery has had ample opportunity to obtain the information by discovery in the action; or
(iii) the burden or expense of the proposed discovery outweighs its likely benefit, considering the needs of the case, the amount in controversy, the parties' resources, the importance of the issues at stake in the action, and the importance of the discovery in resolving the issues.
Fed.R.Civ.P. 26(b)(2)(C).
In this case, the Court determined that the use of predictive coding was appropriate considering: (1) the parties' agreement, (2) the vast amount of ESI to be reviewed (over three million documents), (3) the superiority of computer-assisted review to the available alternatives (i.e., linear manual review or keyword searches), (4) the need for cost effectiveness and proportionality under Rule 26(b)(2)(C), and (5) the transparent process proposed by MSL.
This Court was one of the early signatories to The Sedona Conference Cooperation Proclamation, and has stated that “the best solution in the entire area of electronic discovery is cooperation among counsel. This Court strongly endorses The Sedona Conference Proclamation (available at www.thesedonaconference.org).” William A. Gross Constr. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. at 136. An important aspect of cooperation is transparency in the discovery process. MSL's transparency in its proposed ESI search protocol made it easier for the Court to approve the use of predictive coding. As discussed above on page 10, MSL confirmed that “[a]ll of the documents that are reviewed as a function of the seed set, whether [they] are ultimately coded relevant or irrelevant, aside from privilege, will be turned over to” plaintiffs. (Dkt. No. 88: 2/8/12 Conf. Tr. at 73; see also 2/17/12 ESI Protocol at 14: “MSL will provide Plaintiffs' counsel with all of the non-privileged documents and will provide, to the extent applicable, the issue tag(s) coded for each document .... If necessary, counsel will meet and confer to attempt to resolve any disagreements regarding the coding applied to the documents in the seed set.”) While not all experienced ESI counsel believe it necessary to be as transparent as MSL was willing to be, such transparency allows the opposing counsel (and the Court) to be more comfortable with computer-assisted review, reducing fears about the so-called “black box” of the technology.[14] This Court highly recommends that counsel in future cases be willing to at least discuss, if not agree to, such transparency in the computer-assisted review process.
Several other lessons for the future can be derived from the Court's resolution of the ESI discovery disputes in this case.
First, it is unlikely that courts will be able to determine or approve a party's proposal as to when review and production can stop until the computer-assisted review software has been trained and the results are quality control verified. Only at that point can the parties and the Court see where there is a clear drop off from highly relevant to marginally relevant to not likely to be relevant documents. While cost is a factor under Rule 26(b)(2)(C), it cannot be considered in isolation from the results of the predictive coding process and the amount at issue in the litigation.
Second, staging of discovery by starting with the most likely to be relevant sources (including custodians), without prejudice to the requesting party seeking more after conclusion of that first stage review, is a way to control discovery costs. If staging requires a longer discovery period, most judges should be willing to grant such an extension. (This Judge runs a self-proclaimed “rocket docket,” but informed the parties here of the Court's willingness to extend the discovery cutoff if necessary to allow the staging of custodians and other ESI sources.)
*193 Third, in many cases requesting counsel's client has knowledge of the producing party's records, either because of an employment relationship as here or because of other dealings between the parties (e.g., contractual or other business relationships). It is surprising that in many cases counsel do not appear to have sought and utilized their client's knowledge about the opposing party's custodians and document sources. Similarly, counsel for the producing party often is not sufficiently knowledgeable about their own client's custodians and business terminology. Another way to phrase cooperation is “strategic proactive disclosure of information,” i.e., if you are knowledgeable about and tell the other side who your key custodians are and how you propose to search for the requested documents, opposing counsel and the Court are more apt to agree to your approach (at least as phase one without prejudice).
Fourth, the Court found it very helpful that the parties' ediscovery vendors were present and spoke at the court hearings where the ESI Protocol was discussed. (At ediscovery programs, this is sometimes jokingly referred to as “bring your geek to court day.”) Even where, as here, counsel is very familiar with ESI issues, it is very helpful to have the parties' ediscovery vendors (or in-house IT personnel or in-house ediscovery counsel) present at court conferences where ESI issues are being discussed. It also is important for the vendors and/or knowledgeable counsel to be able to explain complicated ediscovery concepts in ways that make them easily understandable to judges who may not be tech-savvy.
This Opinion appears to be the first in which a Court has approved of the use of computer-assisted review. That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review. Nor does this Opinion endorse any vendor (the Court was very careful not to mention the names of the parties' vendors in the body of this Opinion, although they are revealed in the attached ESI Protocol), nor any particular computer-assisted review tool. What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review. Counsel no longer have to worry about being the “first” or “guinea pig” for judicial acceptance of computer-assisted review. As with keywords or any other technological solution to ediscovery, counsel must design an appropriate process, including use of available technology, with appropriate quality control testing, to review and produce relevant ESI while adhering to Rule 1 and Rule 26(b)(2)(C) proportionality. Computer-assisted review now can be considered judicially-approved for use in appropriate cases.
SO ORDERED.

Footnotes

[1] To correct the many blogs about this case, initiated by a press release from plaintiffs' vendor—the Court did not order the parties to use predictive coding. The parties had agreed to defendants' use of it, but had disputes over the scope and implementation, which the Court ruled on, thus accepting the use of computer-assisted review in this lawsuit.
[2] From a different perspective, every person who uses email uses predictive coding, even if they do not realize it. The “spam filter” is an example of predictive coding.
[3] When defense counsel mentioned the disagreement about predictive coding, I stated that: “You must have thought you died and went to Heaven when this was referred to me,” to which MSL's counsel responded: “Yes, your Honor. Well, I'm just thankful that, you know, we have a person familiar with the predictive coding concept.” (12/2/11 Conf. Tr. at 8–9.)
[4] See, e.g., Societe Nationale Industrielle Aerospatiale v. U.S. Dist. Ct. for the S.D. of Iowa, 482 U.S. 522, 107 S.Ct. 2542, 96 L.Ed.2d 461 (1987); see also The Sedona Conference, International Principles on Discovery, Disclosure & Data Protection (2011), available at http://www.thesedonaconference.org/dltForm?did=IntlPrinciples2011.pdf.
[5] The Court also suggested that the best way to resolve issues about what information might be found in a certain source is for MSL to show plaintiffs a sample printout from that source. (2/8/12 Conf. Tr. at 55–56.)
[6] Plaintiffs included a paragraph noting their objection to the ESI Protocol, as follows:
Plaintiffs object to this ESI Protocol in its entirety. Plaintiffs submitted their own proposed ESI Protocol to the Court, but it was largely rejected. The Court then ordered the parties to submit a joint ESI Protocol reflecting the Court's rulings. Accordingly, Plaintiffs jointly submit this ESI Protocol with MSL, but reserve the right to object to its use in this case.
(ESI Protocol ¶ J.1 at p. 22.)
[7] Rule 26(g)(1) provides:
(g) Signing Disclosures and Discovery Requests, Responses, and Objections.
(1) Signature Required; Effect of Signature. Every disclosure under Rule 26(a)(1) or (a)(3) and every discovery request, response, or objection must be signed by at least one attorney of record in the attorney's own name .... By signing, an attorney or party certifies that to the best of the person's knowledge, information, and belief formed after a reasonable inquiry:
(A) with respect to a disclosure, it is complete and correct as of the time it is made; and
(B) with respect to a discovery request, response, or objection, it is:
(i) consistent with these rules and warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law, or for establishing new law;
(ii) not interposed for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation; and
(iii) neither unreasonable nor unduly burdensome or expensive, considering the needs of the case, prior discovery in the case, the amount in controversy, and the importance of the issues at stake in the action.
Fed.R.Civ.P. 26(g)(1) (emphasis added).
[8] As part of this argument, plaintiffs complain that although both parties' experts (i.e., vendors) spoke at the discovery conferences, they were not sworn in. (Pls. Rule 72(a) Objections at 12: “To his credit, the Magistrate [Judge] did ask the parties to bring [to the conference] the ESI experts they had hired to advise them regarding the creation of an ESI protocol. These experts, however, were never sworn in, and thus the statements they made in court at the hearings were not sworn testimony made under penalty of perjury.”) Plaintiffs never asked the Court to have the experts testify to their qualifications or be sworn in.
[9] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993).
[10] The tougher question, raised in Kleen Prods. LLC v. Packaging Corp. of Am. before Magistrate Judge Nan Nolan in Chicago, is whether the Court, at plaintiffs' request, should order the defendant to use computer-assisted review to respond to plaintiffs' document requests.
[11] The Roitblat, Kershaw & Oot article noted that “[t]he level of agreement among human reviewers is not strikingly high,” around 70–75%. They identify two sources for this variability: fatigue (“A document that they [the reviewers] might have categorized as responsive when they were more attentive might then be categorized [when the reviewer is distracted or fatigued] as non-responsive or vice versa.”), and differences in “strategic judgment.” Id. at 77–78. Another study found that responsiveness “is fairly well defined, and that disagreements among assessors are largely attributable to human error,” with only 5% of reviewer disagreement attributable to borderline or questionable issues as to relevance. Maura R. Grossman & Gordon V. Cormack, Inconsistent Assessment of Responsiveness in E–Discovery: Difference of Opinion or Human Error? 9 (DESI IV: 2011 ICAIL Workshop on Setting Standards for Searching Elec. Stored Info. in Discovery, Research Paper), available at http://www.umiacs.umd.edu/~oard/desi4/papers/grossman3.pdf.
[12] Grossman and Cormack also note that “not all technology-assisted reviews ... are created equal” and that future studies will be needed to “address which technology-assisted review process(es) will improve most on manual review.” Id.
[13] See Ralph C. Losey, “Child's Game of ‘Go Fish’ is a Poor Model for e-Discovery Search,” in Adventures in Electronic Discovery 209–10 (2011).
[14] It also avoids the GIGO problem, i.e., garbage in, garbage out.