Why Lawyers Don’t Care About AI-Powered Analytics

By Rick Merrill

There’s a famous story about the CEO of a major construction hardware company who, when speaking to his management team and board of directors, reminded everyone that customers don’t buy the company’s drills because they want a drill. Customers buy from them because they want to make holes. The drill, its features, its inner workings, don’t matter.

It’s no different in modern legal technology. The market is overrun by poorly defined jargon like “artificial intelligence,” “analytics,” “blockchain,” “big data,” and more. What exactly these things are and how they work don’t matter one bit to the litigator who needs insight into how and why her judge makes rulings.

Clearly, this bait-and-switch is not unique to the legal profession. Numerous companies across a broad range of industries use the term “artificial intelligence” as marketing jargon, even when the core of their product does not rely on cutting-edge AI.

Look at automobile manufacturers adding “eco” to their engines, or every company under the sun now claiming to use blockchain in some fashion. In recent years, we have seen legal tech companies from every corner of the market promote and sell “AI-based” tools. By leaning too hard on the immense “possibilities” of AI, legal tech companies may be deterring lawyers from embracing products that would, in fact, make their jobs easier.

Such marketing claims should be the least effective in the legal industry. Lawyers are not concerned with how technology works – they want to know whether it’s effective. First and foremost, lawyers care about the results that new products can deliver – for example, does this tool increase the likelihood of a positive outcome for my client?

That said, for the sophisticated buyer of legal technology, a working understanding of some of these terms is required to be able to separate the good products from the bad. Confusing technobabble makes purchasing decisions around legal tech harder than it should be while making inferior products sound better than they are.

The definition I tend to use for AI, broadly speaking, is the “use of computers to perform tasks that normally require human intelligence.” With Gavelytics, AI truly “drives” our product. We use machine learning, natural language processing, neural nets and deep learning to give our users access to vital information about the judges assigned to their cases. (But you may notice we don’t focus on AI in our marketing.) As a former litigator, I can say with great confidence that litigators are less concerned with how “cutting edge” a product is than with whether it will help them develop a stronger litigation strategy and lead to the best outcome for the client.

“Analytics” is another term that is used and abused in the legal tech industry. The definition I like for analytics is “the computer-aided discovery of meaningful, useful and accurate patterns in huge data sets that enable users to act on such discoveries in new ways.” What does this mean? To put it simply: a product that displays a chart is not analytics.

A product that uncovers insignificant data or draws inconsequential connections is not analytics.

A product that displays data with low relevance, or that isn’t accurate and valuable, cannot truly be called analytics. Numbers and graphics without actionable value are just that – numbers and graphics. Searchable dockets – or more broadly, limited underlying data sets – don’t provide the level of insight lawyers need to inform their litigation strategy.

Data analytics, on the other hand, can deliver important insights; real AI-powered tools such as judicial analytics can open up new possibilities for lawyers and improve client outcomes. In today’s hyper-competitive legal marketplace, true AI is not “nice to have,” it’s a “must have.”

But in evaluating AI-based tools, lawyers need to see through clever marketing on their way to selecting the product that helps them deliver superior client service, because in the end, that’s all that matters.

Rick Merrill is founder and CEO of AI-driven judicial analytics company Gavelytics and a former litigator at Am Law 100 firm Greenberg Traurig.


What do you think? Is Rick right on this? Feel free to give your views in the comment section below. 

9 Comments

  1. Unfortunately lawyers do care, but mostly from the perspective of mitigating their own risk, not utilising technology for client outcomes. One of the greatest barriers to adoption of useful technology is a lack of understanding, and of course, buzzwords don’t necessarily help, but they’re a natural part of any new technology adoption curve.

    I agree that ultimately we’re hiring technology for a ‘job to be done’ but we can’t underestimate the plethora of ways you can approach a problem. And in order to approach a problem in the best way possible, the market needs to make an informed decision, so the details do matter. Especially to lawyers concerned about security, confidentiality and regulatory details.

    AI and blockchain are essentially a means to an end, as stated, and the buzzwords themselves will fade into the background, similar to how we don’t talk about TCP/IP for our wifi, we just want to surf some waves on the interwebs.

    I like Gavelytics’ no nonsense approach to their messaging: ‘what job am I hiring you for?’

    • Thanks for the comment, Stevie. I think lawyers should care if the product is reliable and can do what it claims to do, but beyond that, I don’t think most litigators need to really monitor exactly what is happening behind the curtain. That said, the buyers of legal tech for large law firms, who are often not lawyers, do need a working understanding of the technology and terminology at issue, lest they be misled by inferior products or marketing jargon.

  2. To litigate (Rick) or to avoid litigation (Nick, also a former litigator), that is the question. When litigation is afoot, an AI-enabled tool is less important than the results achieved by the litigator. But an AI-enabled tool to identify risk is sine qua non to a “less litigation” experience achieved by the client.

    • Thanks, Nick. AI-powered tools aimed at proactively reducing the amount or severity of litigation are very much in their infancy. Over the next several years, however, look for this segment to rapidly expand in size and scope.

  3. Agree that the majority of AI-labeled products are not Artificial Intelligence, but rather Aided Intelligence. While you say that builders want to purchase the ability to “make holes” and don’t care about the quality of the drill, that analogy doesn’t exactly fit the legal tech sector. As you state in your article, text analytics for large document sets need to deliver insights that are both granular & accurate. It’s the classic debate of precision vs. recall. The “actionable value” that you discuss is largely driven by the technology used, for example, whether a machine learning algorithm is trained that delivers 65% accuracy or NLP (computational linguistics) is used to extract insights at 85%+ accuracy. So, I’d say that the “insides” matter when purchasing legal tech, otherwise your holes might be crooked.

    • Thanks, Peter. All analogies are imperfect, of course, and we do agree that products must be reliable to be useful. If the drill is crooked, you won’t get good holes. Similarly, if a legal tech analytics product doesn’t have a rock solid underlying data set, or is really just old tech with marketing jargon claiming to be new (AI) tech, then a user is not likely to get good results. The distinction we’re trying to make is between the end users of the product, who don’t need to understand its inner workings, and the buyers of such products (often non-lawyers at big law firms), who do.

  4. I wholeheartedly agree with Rick’s statement:

    In today’s hyper-competitive legal marketplace, true AI is not “nice to have,” it’s a “must have.”

    Ah, but how to spot “true AI” if you are, like me, only a simple lawyer? There is the rub: you need to go beyond the marketing speeches and try out each of the offered AI solutions for yourself. Legal tech companies should make that possible for lawyers, with test logins etc. for a few weeks.

    • Tom, we think the trick is for lawyers to want a product because of what it does, and for the buyer of such products (who, at big law firms, is often neither a lawyer nor the end user) to want the product both because of what it does and because of the way it works and its likelihood of being a useful, reliable product.

  5. Drills and holes are physical, tangible. Using Google to search for opinions of judges is intangible, and a lawyer who did that would need to know the limitations of the search software and the availability of judicial opinions online. Likewise, a multiple regression model that predicts which associates are more likely to make partner rests on intangibles. Senior lawyers should not rely on the predictions of such a model unless they have some sense of what linearity implies, whether interaction terms could be important, whether logistic regression makes more sense for a binary outcome (Make Partner or Not Make Partner), whether residuals show reasonably symmetric variance, what R squared tells us about how much is missing from the model, what it means to “control for gender”, why a p-value below 0.05 denotes a statistically significant association, and more. The deep insights of machine learning (of which regression is a mild form) deserve a commensurate amount of intuitive understanding. Otherwise, lawyers will be trying to benefit their clients or firms with a powerful tool that can be completely misapplied. Lawyers do not have to understand the algorithms or mathematics (mostly linear algebra and calculus), but they should insist on a broad understanding of limitations and requirements.
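    The commenter’s point about logistic regression suiting a yes/no outcome can be sketched in a few lines of Python. This is a toy illustration only: the features (billable hours, years at firm) and labels are invented for the example, and the tiny hand-rolled gradient-descent fit stands in for whatever real modeling pipeline a firm might use.

```python
import math

# Fabricated example data: each row is (billable hours in thousands,
# years at firm); label 1 = made partner, 0 = did not.
X = [(1.8, 4), (2.2, 6), (1.5, 3), (2.4, 7), (2.0, 5), (1.6, 4)]
y = [0, 1, 0, 1, 1, 0]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Fit weights by plain stochastic gradient descent on the log-loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - target  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Unlike a straight-line fit, the output is a probability between
# 0 and 1 -- which is why logistic regression fits a binary outcome.
p = sigmoid(w[0] * 2.3 + w[1] * 6 + b)
print(round(p, 2))
```

    The point of the sketch is the shape of the output, not the numbers: a linear model can happily predict values below 0 or above 1 for a Make Partner / Not Make Partner question, while the logistic model cannot.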

Comments are closed.