Still Time to Register for Georgetown Law eDiscovery Training Academy

April 21, 2017

Supreme Court

Georgetown Law’s eDiscovery Training Academy is guaranteed to provide you with a unique learning experience. The Academy’s full-week curriculum will give you a total immersion in the subject of eDiscovery, featuring a highly personalized and interactive instructional approach designed to foster an intense connection between all students and a renowned faculty.

The Academy has been designed by experts to be a challenging experience leading to a comprehensive understanding of the discipline. It is demanding, but it will be one of your most exciting and successful learning experiences if you are determined to invest the time and effort.

This year's Academy will be held from June 4-9. To register for the Academy, please visit this page.

Bill Hamilton of U of Florida Levin College of Law & Tom O’Connor of Advanced Discovery talk computer basics for lawyers on a free webinar

March 1, 2017

Join Bill and Tom on Wednesday, March 8, at 12 PM Eastern as they discuss “Computer Basics for Lawyers: Building the Foundation of E-Discovery Competence.”

This one-hour program will present an overview of computer and network operation basics for lawyers and illustrate why understanding basic computer operations and architecture is critical to a successful e-discovery practice. The program will build from a discussion of computer logic gates to an understanding of the structure of computer files as collections of on and off bits. It will also explain basic computer programming and how a computer performs such tasks as adding and “remembering.”
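The gates-to-arithmetic idea can be made concrete in a few lines. This is purely an illustrative sketch of mine, not material from the webinar: a half adder and full adder built from AND/XOR/OR operations, chained to add two binary numbers the way hardware does.

```python
# A half adder built from two logic gates: XOR produces the sum bit,
# AND produces the carry bit. Chaining full adders bit by bit is,
# conceptually, how a computer "adds."

def half_adder(a, b):
    return a ^ b, a & b  # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2   # carry out if either stage carried

def add_bits(x_bits, y_bits):
    """Add two equal-length little-endian lists of 0/1 bits."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

Here 3 is the little-endian bit list `[1, 1, 0]`, and adding 1 (`[1, 0, 0]`) yields 4 (`[0, 0, 1, 0]`), carried entirely by gate operations.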

From this foundation the program will explore the operation and implications of computer peripherals, the range of computer devices, computer networks, social media, cloud computing, and the emerging “Internet of Things.” The program will emphasize how the various operational and architectural features of computers and networks impact and trigger the decisions lawyers must make when navigating the preservation, collection, processing, review, and production phases of electronic discovery.

Free registration is available at:


December 11, 2016

A funny thing happened on the way to the webinar. On Wed Dec. 14th, Advanced Discovery is presenting a live webinar entitled “New Developments in Analytics”. (link and full information at the end of this article) But while preparing the slide deck and speaking with several of our internal experts on the AD Consulting Team (special thanks to Susan Stone, Julia Byerson and Todd Mansbridge for all their feedback) as well as several clients about the topic, I found that we had a surprising lack of agreement on some of the key terms.

I had always viewed TAR (Technology Assisted Review) as the granddaddy of this discussion because many years ago I felt that TAR essentially meant keyword searching. I also felt it then evolved into Predictive Coding, and that later in the game the phrase “analytics” was grabbed from the big data people to refer to some data analytics tools.

So my world view of TAR looked something like this:

Structured Analytics
•  Email threading
•  Near Duplicate detection
•  Language detection

Conceptual Analytics
• Keyword expansion
• Conceptual clustering
• Categorization
• Predictive Coding

And a recent article by another vendor expressed the view that there are three classes of analytics – structured, conceptual and predictive, with predictive including TAR.

Finally, this graphic from Relativity shows that their world view of RAR (Relativity Assisted Review) appears to be one all-encompassing definition.



But other people were looking at these terms from a different perspective. One of our Solutions Consultants elaborated that:

Conceptual indexing is an internal (non-client facing) analytics tool.
Predictive coding is a class of workflows that can sit on top of different internal analytics tools.
RAR is a product (or a feature of a product) that combines both the internal analytics tool of conceptual indexing with a repeatable, defined predictive coding workflow.

Another of our experts expressed it much more simply:

Predictive coding is a process not a product or service.

And of course, I had to add to the confusion by asking where CAL (Continuous Active Learning) fit into this hierarchy. One of our senior analytics gurus responded to that query with:

In my opinion CAL is a workflow. It selects seed documents based on categorization which is live rather than passive which requires you to submit after you have reviewed a set number.
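The distinction that quote draws, live re-ranking versus a fixed seed set, can be sketched in a few lines of Python. This is a purely illustrative toy of my own (the names `cal_review`, `score`, and `is_relevant` are hypothetical, not any vendor's API): the "model" is updated after every reviewed batch and immediately re-ranks the remaining documents, rather than waiting for a set number of seed documents to be reviewed and submitted.

```python
# Toy Continuous Active Learning loop. The "model" here is just a
# term-overlap score; real systems use trained classifiers, but the
# workflow shape (review a batch, retrain, re-rank, repeat) is the point.

def score(doc, relevant_terms):
    return sum(1 for w in doc.split() if w in relevant_terms)

def cal_review(docs, is_relevant, batch_size=2, max_rounds=10):
    unreviewed = list(docs)
    relevant_terms = set()   # the continuously-updated "model"
    found = []
    for _ in range(max_rounds):
        if not unreviewed:
            break
        # re-rank everything still unreviewed with the current model
        unreviewed.sort(key=lambda d: score(d, relevant_terms), reverse=True)
        batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
        for doc in batch:                      # the human review step
            if is_relevant(doc):
                found.append(doc)
                relevant_terms.update(doc.split())  # learn immediately
    return found
```

In a passive workflow the ranking would be computed once from a pre-reviewed seed set; here every batch of human decisions feeds straight back into the next selection.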

Finally, when I ran all this by Matthew Verga, the Advanced Discovery VP of Marketing Content, he fell clearly in the tools vs workflow camp, saying:

TAR is [a] process that uses analytic tools to amplify human decision-making. Relativity Assisted Review is a form of TAR [and] is powered by categorization, an analytic tool. Neither Analytics nor TAR contains the other as a subset; they are different categories of things.

And indeed it is this workflow paradigm that is much more prevalent today, as we will discuss in the webinar. But what I’m interested in is hearing what YOU think. Analytics, TAR, Predictive Coding …. how do YOU define these terms and how do you use them in your work with ESI?

Drop me a note at  and join us on Wednesday at 1pm EST for an hour-long nuts and bolts discussion about how to use technology in your eDiscovery practice. I'll be joined by Anne Bentley McCray from McGuireWoods LLP, an attorney experienced in working with ESI, to discuss these concepts and others.

The registration page can be found at:


September 3, 2016

It’s Friday morning and ILTACON16 is in the books. This is my favorite conference of the year, for several reasons. First, ILTA is a user group for IT folks at law firms, which means they have a very high degree of technical understanding. Second, they are interested in solutions that work well within their IT structure, so they have a wider view of technical specifications. And third, they talk with each other about vendors and solutions, so they are well versed in the overall market tensions and variations.

The show was well attended as always, with approximately 1500 people on site. This made for good interaction in the conference venue, the National Harbor Gaylord. But the size of the venue also meant that the exhibit hall had plenty of elbow room. With 1500 attendees and close to 200 vendors in the hall, this was a welcome change from other shows with small uncomfortable venues.

Another feature of ILTACON is the close placement of social events, so that vendor parties and receptions for special groups were easy to find and therefore attend. My favorite this year was the Bryan U reception at the Public House, which had a great attendance of ESI luminaries including Michael Arkfeld, Casey Flaherty, Scott Cohen, Craig Ball, Ian Campbell and Kim Taylor. Host Bill Hamilton showed us a short video trailer discussing the school's latest project, an online ESI competency course. It looks very promising, and more information will be forthcoming on the school's web site.

The educational sessions were, as always, tremendous: three and a half days of multiple sessions on a host of technical issues. Security was a topic of high interest, as was info governance with overtones of ediscovery. And panels on project management and metrics with Mike Quartararo of Stroock & Stroock & Lavan and Scott Cohen of Winston & Strawn were big draws.

Mike was also at the Authors Corner showing off his new book, Project Management in Electronic Discovery. It is literally the first book on the subject and was a big hit; kCura even bought a number of copies to give away in their booth. Great job, Mike!

And I was personally heartened to see the attendance at two lit support specific panels I was speaking on that were held Thursday afternoon, the last slot of the conference for educational sessions.  On the first one I was the moderator for A Road Map To Gathering and Analyzing Client Discovery Data Across Matters which featured AD’s own Kate Head as a panelist with ILTA stalwart, Chad Papenfuss, Litigation Support Services Manager at Kirkland & Ellis, roving the audience with a microphone and prompting discussions. Great session!

The second was the lit support group's conference-ending annual Gather ‘Round for a Litigation Support Roundtable. Great turnout of over 40 people, including Kate, Chad, Julie Brown of Vorys, Sater, Seymour and Pease, and Craig Ball, with a lively discussion on new technology and trends that left us heading home on a high note.

Finally I must note that this ILTACON was the swan song for Executive Director Randi Mayes, who has announced her retirement.  I’ve known Randi for more years than either one of us cares to admit and she has always been a great leader and an even better person. We’ll all miss her.

All in all, a great conference with excellent content and attendee discussions. I highly recommend it and hope to see more of you next year at ILTACON17 at Mandalay Bay in Las Vegas.

Tech Assessment and You

August 22, 2016

From this week's Advanced Discovery blog:

I’ve been following the blog series on Insourcing vs. Outsourcing, by my Advanced Discovery colleague Matthew Verga, and found this week’s chapter especially interesting. The series is basically a more detailed deep dive into the topic that Matthew and I addressed in a webinar a while back (you can see a replay of the presentation here:

The most recent installment is called Organizational Self-Assessment: Technology Factors and can be found on the Advanced Discovery blog page at  The topic is Technology Factors which, as Matthew defines in this context, refers to an organization’s overall technology resources, sophistication, and comfort level.  In reading that, I came to realize that when we did our webinar, we didn’t mention the wonderful tech audit tool for attorneys.

The tool was first developed by Casey Flaherty ( ) when he was Corporate Counsel at Kia Motors. The short version is that while at Kia, Casey decided to test the tech skill level of the company’s outside counsel.  A full background of that story can be found in an ABA Journal article at

The more recent development is that, after leaving Kia, Casey started doing consulting work and teamed up with Professor Andrew Perlman, Dean of Suffolk University Law School (, to create a new tech test he calls the tech audit. Suffolk has a legal technology think tank called the Institute on Law Practice Technology & Innovation (, and together they have created a Legal Tech Assessment project ( This project uses the tech audit to show not only how much you know (or do not know), but also how much time you waste on basic tech tasks when you don't know enough.

It’s a fascinating story with equally fascinating results.  Take a look at some of the links to read about the results. You’ll be surprised.

Or not.

And don’t forget to follow the rest of Matthew's series. Next week he’ll be writing about financial factors, the third of his four categories of key factors for organizational self-assessment. And later in the month he’ll begin the fourth category, human resources factors, before finally turning to key takeaways from the entire series. Don’t miss it.

Help The Folks Flooded Out In Looziana

August 16, 2016

Pay It Forward

TAR and Keywords and Proportionality: Oh My!!

August 10, 2016

I’ve waited a bit to write this post because I wanted to see what my colleagues were saying about the latest opinion from Judge Peck. In ED circles, a new ESI opinion from Judge Peck is more highly anticipated than the next Bruce Springsteen CD, except maybe in the Facciola household where The Boss is revered just below … well, actually I’m not sure his status is below that of anything in the Facciola home except, of course, Mrs. Facciola.

Earlier this week Judge Peck opined in Hyles v. New York City, No. 10 Civ. 3119 (S.D.N.Y. Aug. 1, 2016), that proportionality trumped TAR. And he didn’t beat around the bush about it, stating in the very first paragraph of the order:

“The key issue is whether, at plaintiff Hyles’ request, the defendant City (i.e., the responding party) can be forced to use TAR (technology assisted review, aka predictive coding) when the City prefers to use keyword searching. The short answer is a decisive “NO.” “

His reasoning was, of course, that absent an agreement of the parties as to a specific search protocol, the applicable standard is Sedona Principle 6, which holds that

Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information. (The Sedona Principles: Second Edition, Best Practices Recommendations & Principles for Addressing Electronic Document Production, Principle 6 ,

Well, the Twitterverse exploded with comments about how Judge Peck declined to order the parties to use TAR this, and Judge Peck pulls back on TAR enthusiasm that. In the spirit of the impending football season, all I can say is “Come on, man.”

First off, we all know that Judge Peck has never ordered anyone to use TAR. He has entered orders in several cases where the parties agreed to use TAR, something that had not happened in Hyles.

This reflects a lack of fundamental understanding of the fact that in the legal profession (that’s profession, not industry) the word “order” can be either a noun or a verb. I lay this lack of understanding squarely at the feet of the ever-increasing assimilation of eDiscovery software and services companies by people who have no legal background. I’ve said it before, and I won’t go off on that particular rant again here.

Rather, I’d like to point out one part of the proportionality debate that seems to be missing. Judge Peck in Hyles mentions cooperation and speed of process, and refers to the Tax Court decision in Dynamo Holdings Ltd. P’ship v. Comm’r of Internal Revenue, 143 T.C. 9, 2014 WL 4636526, at *3 (2014), which spoke to the same considerations.

But in his decision, Judge Peck notes on page 3 that “… in general, TAR is cheaper, more efficient and superior to keyword searching.” I think that if I say “not so fast” one more time in a column I’m going to hear from Lee Corso’s attorneys, but I have to say that I don’t believe the issue of “cheaper” has been clearly established. Even in Hyles, Judge Peck says at footnote 2 that “The Court acknowledges that some vendor pricing models charge more for TAR than for keywords. Usually any such extra cost is more than offset by cost savings in review time.”

I respectfully argue that there has been no empirical validation of that statement that I have seen. Now, it may very well be that vendors have filed briefs in matters that address that point, or have even presented substantiation for such a position during the submission of attorney fee claims, in cases that I have not seen. So what I’d really like to see is a case study that shows the efficiency of TAR on a particular set of documents based on price savings, not time savings.
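To see why "cheaper" depends entirely on assumptions, here is a back-of-the-envelope comparison. Every number and name in it is hypothetical (mine, not from the opinion or any vendor): a flat TAR surcharge weighed against the hours of linear review it avoids.

```python
# Hypothetical cost comparison: keyword review of everything vs. a TAR
# surcharge plus review of the smaller set TAR leaves for human eyes.

def review_cost(docs_to_review, docs_per_hour, hourly_rate):
    """Cost of human review at an assumed pace and billing rate."""
    return docs_to_review / docs_per_hour * hourly_rate

def compare(total_docs, tar_surcharge, tar_cut_fraction,
            docs_per_hour=50, hourly_rate=60):
    """Return (keyword_cost, tar_cost) under these assumed inputs."""
    keyword_cost = review_cost(total_docs, docs_per_hour, hourly_rate)
    tar_cost = tar_surcharge + review_cost(
        total_docs * (1 - tar_cut_fraction), docs_per_hour, hourly_rate)
    return keyword_cost, tar_cost

# e.g. compare(100_000, 25_000, 0.5) -> (120000.0, 85000.0)
```

Shrink the surcharge or raise the cull rate and TAR wins handily; a small collection with a large surcharge flips the result. That sensitivity is exactly why the "cheaper" claim needs case studies rather than assertion.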

Not that time savings is irrelevant, or that it should never be the deciding factor. But it should be just that: a factor, one of several to be weighed when deciding which tool to use.

I note with interest that David Horrigan, E-Discovery Counsel and Legal Content Director at kCura, in a blog post this week on another case, the 10th U.S. Circuit Court of Appeals’ decision last week in Xiong v. Knight Trans (see ), mentioned that “We’ve always been skeptical of attempts to use the 1985 Blair and Maron study to argue that keyword searches are only 20 percent accurate.” I’ve also disagreed with the general proposition that TAR is always better than keyword searching (see my post Reports of the Death of Keyword Search Are Greatly Exaggerated at ), and I think the point here is the same one that Judge Peck makes on page 5 of the order in Hyles.

“ It is not up to the Court, or the requesting party (Hyles), to force the City as the responding party to use TAR when it prefers to use keyword searching.  … While Hyles may well be correct that production using keywords may not be as complete as it would be if TAR were used (7/18/16 Ltr. at 4-5), the standard is not perfection, or using the “best” tool (see 7/18/16 Ltr. at 4), but whether the search results are reasonable and proportional. Cf. Fed. R. Civ. P. 26(g)(1)(B). “
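For readers who want the metric behind the Blair and Maron debate mentioned above: the famous "20 percent accurate" figure is usually framed as recall, the fraction of truly relevant documents a search actually retrieves, as distinct from precision, the fraction of retrieved documents that are relevant. A minimal sketch (function names are mine, purely illustrative):

```python
# Recall: of all the relevant documents that exist, how many did we find?
# Precision: of the documents we retrieved, how many were relevant?

def recall(retrieved, relevant):
    relevant = set(relevant)
    return len(relevant & set(retrieved)) / len(relevant)

def precision(retrieved, relevant):
    retrieved = set(retrieved)
    return len(set(relevant) & retrieved) / len(retrieved)
```

A keyword search can score well on one and terribly on the other, which is why "accurate" without a named metric invites exactly the arguments Horrigan is skeptical of.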

And as one consultant in our field (who wished to stay above the fray so I will not use his name) said to me recently:

“Choice of predictive coding, managed review (with or without validated search), or just validated search does NOT pre-determine success.  … So a bad protocol might lead to poor results in all three, and a good protocol might turn south in all three if calibration and QC is missing, or if it is improperly applied.  Ultimately it is only “better” if a reasonable production is made without substantial critical documents left on the cutting room floor.”

As I said several years ago in another column about another issue   ( ) :

“It’s the archer not the arrow”.