LawGeex and The AI Gap

Natural language processing (NLP) is a wonderful tool, but it has its limitations – something LawGeex eventually discovered after launching one of the first ‘legal AI’ companies. The answer was to put human lawyers in the loop.

As Audelia Boker, VP of Marketing at the Israel-based contract review company, explained to Artificial Lawyer: ‘Without accuracy you don’t have a solution. It’s table stakes.’

‘LawGeex went through that journey. We started out as AI only, but saw that the tech was not mature enough. We took it upon ourselves to provide an end to end solution. And so we have lawyers [on staff].’

This matters especially for LawGeex as they are providing a contract review service, often to corporates. It has to be right. The review will also often need to follow a specific client-provided playbook all the way through. Getting things 80% right and then handing back the work in an unfinished state won’t be enough – unless the client is going to finish off the job itself.

And, it’s worth pointing out that this is unlike the NLP use cases in other areas of the law. In legal research, NLP is an improvement on keyword search for dragging up relevant files, and the same goes for knowledge management (KM) when trawling through your DMS – 100% accuracy would be nice, but it’s not a killer issue if you only get 90% of the relevant information back, especially when your normal research methods using keywords may only hit 60%, for example.

But, if a business asks you to look at a contract for them, to analyse it, and highlight what may be a legal issue and suggest changes that meet very specific rules, then it has to be right all the way through.

This is where LawGeex found itself. In short, there was an ‘AI gap’ – the NLP software, even though over the years it had been very well trained, was just not getting all the way there.

Noory Bechor, CEO and co-founder, continued: ‘There is a gap between where the tech is and where we are, both a data gap and an algorithm gap.’

He noted that it was around 2019 when they made a shift to bringing in more people to help with Quality Assurance (QA).

‘Clients say: if we delegate the work to you then we expect you to take it all the way,’ he added.

Now, clients could take back the unfinished review and finish it themselves, a bit like buying Ikea furniture and doing the final assembly work at home, but that then undermines your value as a service. Understandably, corporates, which are in effect outsourcing some of their contract review work to a third party, just want it done, just as they would if they sent that work to a law firm, or an ALSP.

And here is where it gets really interesting. For the client it may not matter how we get there, maybe it’s five lawyers and a little bit of tech inside a law firm working on their outsourced review work, maybe it’s a lot of tech and two lawyers inside ‘a legal tech company’. What matters is the end result.

The question is then: how big is the gap between what the AI can do and the end result that is needed?

Bechor explained that this is very much a question of ‘it depends’. There are some cases that are very bespoke, others that are ‘normal’ that their NLP has seen many, many times before.

However, he noted that they can get to a point of about 90% automation and 10% human review to provide the quality assurance the client wants. Other times the mix involves a lot more human input.

Bechor also added that as they work with a particular client and its playbooks they get better and better – and that is natural given how the NLP can be trained. And this leads to the next step, which is now giving advice to the clients on how to improve their playbooks, something that is regarded as regulated legal activity in the US.

As highlighted by this site, LawGeex applied to the Utah Regulatory Sandbox, which allows legal businesses that are not wholly owned by lawyers to operate there, and it was accepted. This has allowed them to use their legal team in Utah – which was already there – to not just do quality assurance, but to provide advice to clients – across the US – on how to improve their playbooks. (See story.)

In short, they are now offering even more value via the human component of the company. Artificial Lawyer looked at this growth in human lawyer input and saw it as an ALSP – i.e. this is legal services: reviewing documents for a company and also providing legal input on how to improve its playbooks. But, it’s not a traditional law firm.

In fact, one could see any tech company that provides a direct legal service to clients and uses lawyers to ‘close the AI gap’ as an ALSP, not just LawGeex. I.e. a client is outsourcing ‘legal work’ to a third party (which may be tech-heavy or not, and perhaps the client doesn’t care, as long as that work is done well by this ‘provider’).

Bechor doesn’t like this term to describe LawGeex, as from his point of view they’re a tech company first and then comes some human input. But, if we look at outputs rather than how the work gets done, then they really are now an alternative to a traditional legal service.

I.e. LawGeex is not trying to be a law firm, alternative or otherwise, it wants to be a tech company, but in closing the AI gap with human lawyers to provide quality control, and now expanding the role of their experts to hand out advice, the end result can be viewed as an ALSP of sorts.

(One other point Bechor noted is that ALSPs are not always giving legal advice, even if they might do contract review work. But, in some markets ALSPs do give legal advice as well, such as in the UK. Moreover, as noted, the key point here is that they’re providing a service that normally a law firm would provide, i.e. this is an alternative service. Any road, back to the gap.)

So, what’s next? Bechor explained that for now, this is the model they are staying with. It’s like the difference between a Tesla on cruise control and a car that can really go from A to B with zero driver input.

‘We are at the interim stage,’ he concluded.

Maybe one day NLP tools will be advanced enough to close this AI gap, but we are not there yet. And so, we will be relying on human input for some years to come. And perhaps, who knows, maybe we’ll always need some human input to be sure and to satisfy the client, no matter how advanced things get?