Happiness is… Accepting AI’s Limitations

Genuine happiness comes from being at one with the truth. So, let’s face facts: ‘AI’ tools are not perfect. They will always miss things, sometimes make the wrong decisions, and generally operate imperfectly. Once you can accept that, and find ways to leverage such tools for use cases where that truth fits, then you are on the road to happiness. Let me explain.

AI’s Perfection Problem

It was 2016 and the legal AI buzz was at its peak, driven at the time mostly by companies focused on use cases such as exhaustive doc review in M&A due diligence, and relying primarily on natural language processing (NLP).

I was invited to a dinner at a major law firm to give a short talk about this technology. Despite many in the room having been primed by the idea that ‘the end of lawyers’ was soon to arrive, and given the mood of the moment, the talk stuck to the facts and went well, or so I thought. Then came the first question from one of the partners sitting around the table: ‘But, how can this ever be more accurate than a human lawyer?’ The implication being: if it was not, then why use it?

I replied: ‘Perhaps the real question here is whether human lawyers are as accurate as they think they are?’

This raised a few chuckles and seemed to calm the mood. We then discussed how this technology can handle incredible volumes of data; how, when managed well, it was far more efficient (note: this seemed to aggravate things, for a reason I hadn’t spent much time considering at that point); and how, with training, its accuracy would steadily improve.

And so the discussion went back, once again, to accuracy, and much time and energy was spent on it. The lawyers were not going to let that point go.

Looking back, it’s easy to see why things became so fixated on that one point. At the time – and thankfully it doesn’t really happen now – the debate about ‘AI’, not just in the law but in all sectors of the economy, was all about ‘replacement’.

‘We welcome our AI overlords!’ people would joke sardonically, assuming that by the end of the decade there really would be mass unemployment because of this new software. (Which, of course, has not happened.)

Lawyers had been primed to see this type of technology as ‘competition’. It’s understandable that they became defensive and attacked (quite correctly, as it turns out) NLP’s weakest point, i.e. its accuracy.

Human lawyers may not be perfect either, but it’s a well-known legal technique to win a case by damaging the credibility of the opponent.

Of course, you can alleviate much (but not all…) of that inaccuracy with plenty of training, whether arranged in advance or handled by the user. But will the NLP system still miss things? Yes, of course. Will unique language and unusual documents fail to gel with what the NLP in that instance has been trained upon? Of course.

So, that means the lawyers at the dinner table back in 2016 were absolutely right: AI is imperfect.

In which case, why are we still using this technology?

Embracing AI’s Limitations – It’s All in the Use Case

The reason we are still using this tech, and in fact why its use is proliferating, is in part that many software developers are wielding it for use cases where perfection is not expected – and where, even if it is expected, failings will not result in the world coming to an end. Here are some examples:

  • The first run-through for M&A due diligence – rather than using NLP tools to provide the absolute last word during a transactional doc review, embrace the fact that this tech can rapidly help to highlight plenty of valuable information, from key clauses to red flags, without any training by the user. That it doesn’t find absolutely everything at once is not an issue, because the use case is all about making a quick analysis that then helps to steer the human review.
  • The use of NLP in eDiscovery – given that eDiscovery processes involve vast volumes of documents, texts, emails and other information, no one is expecting 100% accuracy on the first run-through. But such software, with its ability to focus on concepts and language context rather than just word search, can provide a very useful boost to any eDiscovery project.
  • Where the doc set is highly predictable – while an M&A deal may present a law firm with a very complex set of documents, a CLM platform that is primarily funnelling historical sales contracts and Master Service Agreements through an NLP system will likely be able to find a very high percentage of all the key data the company wants. Moreover, when one considers that previously they may have gathered no info at all on this past doc collection, what is gained is a major improvement, even when it’s not perfect. Also, the doc set here is very narrow and very predictable. Even so, will it miss aspects with very unusual language? Yep, which is why companies are also offering the ability to train the NLP to cover such holes. If you can accept these limitations then this is a very handy tool to have. If you think you can chuck anything at it with no training, go totally outside the kinds of docs the system is expecting to see, and get 100% accuracy, then you will be disappointed. It’s all about having sensible expectations. (A minimal sketch of this kind of data extraction appears after this list.)
  • NLP and legal research / KM – you can use NLP in legal research and for KM searches, just as you do for transactional doc review. It will gather info not just via word search, but can provide contextual and concept-based searches that work on phrases and sections of language, e.g. how a legal argument is phrased, or what it means, or how a clause is worded, or what it refers to. It’s very useful and many companies now leverage it. Is it 100% accurate? No, it is not. But does it matter? It’s still way better than just using word search on its own. So, again, it’s about accepting the use case. If I use word search and get back 10,000 documents that I then have to trawl through manually, a search using NLP for the concepts I want may give me a few dozen instead. Is that perfect? Nope. But it’s an improvement. Could it potentially miss a couple of relevant docs because the NLP’s training doesn’t capture everything? Yes, that may happen. But, again, what you are getting is an improvement on word search – and going through a law firm’s millions of docs by hand is just not feasible. So, it’s a win if you can accept the limitations of the technology. (A sketch of concept search versus word search also appears after this list.)
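
To make the ‘predictable doc set’ point concrete, here is a minimal sketch of pulling key data from a sales contract, assuming the open-source spaCy library and its small English model – an illustrative choice, since no specific CLM platform or NLP stack is named above. Off-the-shelf entity recognition already shows both the value and the gaps; a real system would be trained on contract-specific labels.

```python
# Minimal sketch: extracting key data points from a predictable
# contract set with spaCy (illustrative; the article names no tools).
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

contract = (
    "This Master Service Agreement is entered into on 1 March 2021 "
    "between Acme Corp and Globex Ltd. The initial term is 24 months, "
    "with fees of $150,000 per annum."
)

# Off-the-shelf named-entity recognition surfaces parties, dates,
# durations and amounts: the 'key data' a CLM pipeline wants.
for ent in nlp(contract).ents:
    print(f"{ent.label_:>8}  {ent.text}")

# Typical output (model-dependent):
#     DATE  1 March 2021
#      ORG  Acme Corp
#      ORG  Globex Ltd
#     DATE  24 months
#    MONEY  $150,000
# A clause written as 'the sum of one hundred and fifty thousand
# dollars' may be missed entirely: exactly the imperfection discussed above.
```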
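
And to illustrate the word-search-versus-concept-search contrast from the eDiscovery and legal research bullets, here is a minimal sketch assuming the open-source sentence-transformers library – again an illustrative stand-in for whatever a given vendor actually uses. Embeddings let a query match documents by meaning rather than by exact words.

```python
# Minimal sketch: word search vs concept-based search, using
# sentence-transformers (illustrative; no specific tool is named above).
# Setup: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

docs = [
    "The supplier may terminate this agreement on 30 days' written notice.",
    "Either party can end the contract with one month's advance warning.",
    "Payment is due within 45 days of the invoice date.",
]
query = "termination notice period"

# 1. Word search: only literal matches count.
print([d for d in docs if "termination" in d.lower()])
# -> []  (no doc contains the exact word 'termination')

# 2. Concept search: embed the query and docs, rank by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")  # a common small model
doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]

for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
# The two clauses about ending the contract rank above the payment
# clause, despite sharing few words with the query. Whether a borderline
# doc makes the cut depends on the model: the imperfection to accept.
```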

Conclusion

As illustrated, ‘legal AI’ really got off on the wrong foot from day one. It was viewed as a ‘replacement’ – which it has not really become. And it was pitched as some kind of competition centred on accuracy – which is a battle it can’t win (yet).

Maybe one day we will have ‘AI’ that is so smart it can handle everything that is thrown at it with 100% accuracy. But, right now that doesn’t exist. The way ahead – and in fact many companies have just quietly been getting on with this approach – is to provide solutions to use cases where imperfection is acceptable.

Google isn’t perfect, but we all use it. Google also never got into a fight with lawyers about how it could do something better than they could. And there is an important lesson here.

As they say, ‘pick your battles’ – or, in this scenario, ‘pick your use cases’ – and by doing so set out the expectations involved. And, one might say, rather than agonise over perfection, focus on what can be achieved and how that is better than what came before.

Perfection is for the Gods; let’s find happiness by embracing AI’s limitations.

By Richard Tromans, Founder, Artificial Lawyer, July 2022.