
Ahead of the launch of LexisNexis’ major release of Lexis+ AI in the UK, Artificial Lawyer spoke in-depth to Jeff Pfeifer, CPO for North America and the UK, about the recent Henchman acquisition, why the new genAI platform is such a big deal, and the fallout from the Stanford HAI study.
The Henchman Deal
First, the purchase of Henchman, the Belgian contract intelligence and drafting startup that is set to become a core part of LexisNexis’ offering. It’s also one of the biggest deals the company has done since it bought CLM business Parley Pro, and Pfeifer is keen to explain why it matters.
(Note: Although Lexis is a publicly listed company the sum paid for Henchman has not been made public, as the figure – relative to the multi-billion-dollar giant RELX, which owns Lexis – was not considered to have reached a material threshold.)
So, why the deal with the pioneering 40-plus staff, European company?
‘What interested us in Henchman is their unique approach to indexing and classifying data,’ explains Pfeifer. ‘The way they use semantic clusters to find clauses from a big data set can be used in a lot of product use cases.’
Lawyers have always faced the challenge of finding relevant clauses from their prior work product in often massive and hard-to-search law firm databases. Henchman helps to solve that problem.
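To make the idea concrete, here is a minimal sketch of clause clustering and retrieval. It is purely illustrative and not Henchman’s actual method: a real system would use a semantic embedding model, whereas this stand-in uses simple bag-of-words vectors and cosine similarity.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" -- a stand-in for a real semantic model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(clauses, threshold=0.3):
    # Greedy single-pass clustering: each clause joins the first cluster
    # whose representative is similar enough, otherwise starts a new one.
    clusters = []
    for clause in clauses:
        vec = embed(clause)
        for rep_vec, members in clusters:
            if cosine(vec, rep_vec) >= threshold:
                members.append(clause)
                break
        else:
            clusters.append((vec, [clause]))
    return [members for _, members in clusters]

def find_similar(query, clauses, top_k=2):
    # Rank stored clauses by similarity to the query and return the best.
    qv = embed(query)
    ranked = sorted(clauses, key=lambda c: cosine(qv, embed(c)), reverse=True)
    return ranked[:top_k]

clauses = [
    "The Supplier shall indemnify the Customer against all losses.",
    "The Customer shall be indemnified by the Supplier for any losses.",
    "This Agreement is governed by the laws of England and Wales.",
]
print(cluster(clauses))
print(find_similar("indemnify losses supplier", clauses, top_k=1))
```

The two indemnity clauses group together while the governing-law clause forms its own cluster, which is the basic intuition behind surfacing related clauses from a large document set.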
And Pfeifer adds that although the focus now is on integrating the team and bringing Henchman’s capabilities into how Lexis helps lawyers draft contracts, the goal is to expand this clustering and search ability, which links to a DMS, to litigation use cases as well. Henchman is also built to integrate closely with Word, where lawyers live, so the use cases will likely keep on coming. And of course, it will be ‘plugged into Lexis+ AI’. The company will also be integrating Henchman with Parley Pro to support the development of its own CLM offering.
Pfeifer also notes that the system, which uses a mix of technologies, ‘is language agnostic’ and that lawyers ‘can use their own data [to find related clauses] or use Lexis data’. So users have a lot of choice.
In short, Henchman provides a very useful capability for anyone drafting a legal document, and Lexis is pushing more and more into the transactional and drafting space, so this makes total sense. And as noted, it will in time be used to help on the disputes side as well.
We close off the Henchman discussion with Pfeifer stating that all the senior team there will be joining Lexis and working on the journey ahead.
Lexis+ AI – And Why It’s a Big Deal
Pfeifer notes that launching the multi-faceted generative AI capability in the UK has demanded a lot of work. But, it’s paying off already. The company has also ‘never seen a product sell so fast [and] pre-purchased before sale’, he notes.
Naturally, the case law side of things involves a very different set of data to the US version. And on the contract side, drafting commercial deals here is quite different to how they do things in America. In short, this was not a simple case of just connecting the US system to a new market; it has needed a lot of adapting to the UK legal ecosystem – but now it’s live.
As noted in the earlier piece (link), Lexis+ AI offers a wide range of use cases, with the idea that it becomes part of any lawyer’s daily toolkit.
They’ve put a lot into this, and it’s probably fair to say that the future of Lexis’s relationship with its legal sector customers will increasingly centre on a generative AI interface linked to all their data, their transactional tools, and the workflows they’ve developed or acquired to help lawyers do their work – from CLM needs, to case law research, to drafting. In short, generative AI is not just an add-on here; it is the future of the company and will be at its core. Plus, this is just the beginning.
After the UK roll-out comes Canada, and then other markets too. In short, eventually the whole world of LexisNexis customers will get Lexis+ AI.
On the global reach point, Pfeifer noted that it’s been fascinating to see how lawyers in different countries frame their questions, which in turn means Lexis+ AI has to be adaptable in how its generative AI copes with those different approaches.
Pfeifer is clearly excited by the roll-outs around the world and what they are learning as the genAI project continues. The company is in both product delivery mode and frontier exploration mode at the same time.
The Stanford HAI Study
As readers will know, a Stanford HAI study last month looked at Lexis’s and Thomson Reuters’ genAI tools for case law research. Neither came out looking perfect in what has been a challenging study that has its own questions to answer. But, Lexis was perhaps spared some of the harsher conclusions. What does Pfeifer make of it all?
First, he notes that Stanford and Lexis clearly took different approaches to the questions asked and the way responses were judged when testing. Second, Stanford tested the first generation of the Lexis+ AI capability, and the company is already onto the second-gen version, with plenty of ongoing work happening to continually improve accuracy. He also adds that, from what they have seen so far, hallucinations within their system are ‘usually in low single digits at most’ – i.e. a lot lower than the study suggested.
‘We work to make sure results are acceptable. We score our results by answer usefulness,’ Pfeifer says. ‘Even if there is some inaccuracy, customers can still find it useful.’
‘[Some of] the Stanford HAI questions, we have not seen those from our customers,’ he added, and that’s something AL noted early on in the debate about the study, i.e. does having some questions that are designed to confuse the genAI system really provide a useful picture of how lawyers conduct case law research? And if you base the percentage success rates partly on those deliberately false questions, does that give a realistic measure of a search product?
Pfeifer also notes that how you define a hallucination is open to different views. There are answers that are partly wrong but partly right, and there are answers that are totally off.
In short, Stanford and Lexis are analysing results quite differently. He also stresses that although the study raised doubts about RAG, or retrieval augmented generation, i.e. a way to check answers are right by finding factual data to back up a response, this approach should not be dismissed.
‘RAG is not just search and retrieval. We have included 10 services at the same time: ones that check intent, that check the user made the right query, that do citation checking, that look at metadata. There are quite a few services that run concurrently, so we have a connected graph [that supports the RAG process],’ he says, adding that he feels very strongly about not dismissing RAG’s ability to ensure accuracy.
They will add more ways to make RAG even better, he adds. In short, RAG can do the job; it’s just a question of adding enough methodologies for proving correctness to remove hallucinations and errors. They also use nine different LLMs with Lexis+ AI, not just GPT-4 but also Mistral and Anthropic models, which ensures the right LLM is used for each task.
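The shape of such a pipeline – concurrent verification services plus per-task model routing – can be sketched as follows. To be clear, this is not Lexis’s actual architecture: the service names, routing table, and model labels are all hypothetical, loosely inspired by what Pfeifer describes.

```python
# Hypothetical sketch of a RAG pipeline with verification services and
# task-based LLM routing; none of these names come from Lexis.

def check_intent(query):
    # Crude intent classifier: drafting vs research (illustrative only).
    if any(w in query.lower() for w in ("draft", "clause")):
        return "drafting"
    return "research"

def check_citations(answer, known_citations):
    # Flag any cited authority that is not in the retrieved source set.
    cited = [tok for tok in answer.split() if tok.startswith("[")]
    return [c for c in cited if c not in known_citations]

MODEL_ROUTES = {
    # Task-to-model routing; the article says nine LLMs are used in
    # practice, but this two-entry mapping is purely illustrative.
    "research": "model-a",
    "drafting": "model-b",
}

def answer_with_rag(query, retrieved_docs):
    task = check_intent(query)
    model = MODEL_ROUTES[task]
    known = {doc["citation"] for doc in retrieved_docs}
    # A real system would call the routed LLM here; we fake an answer
    # grounded purely in the retrieved documents.
    draft = " ".join(doc["citation"] for doc in retrieved_docs)
    unsupported = check_citations(draft, known)
    return {"model": model, "task": task, "answer": draft,
            "unsupported_citations": unsupported}

docs = [{"citation": "[2019]_EWCA_Civ_123"}, {"citation": "[2021]_UKSC_4"}]
result = answer_with_rag("What is the leading case on liability caps?", docs)
print(result["task"], result["model"], result["unsupported_citations"])
```

The point of the sketch is the structure: several independent checks run over the same query and answer, and any citation the retrieval step cannot back up gets flagged rather than passed through.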
‘These are the earliest days [of genAI and RAG]. We are seeing an exponential improvement of product quarter by quarter,’ he stressed. ‘It’s astonishing.’
He adds that Lexis would be happy to work with other parties, including Thomson Reuters, to help develop benchmarks that the sector can agree on for measuring accuracy of genAI tools. And clearly we need something that parties can agree on here.
Conclusion: Why GenAI Matters
‘LLMs are essential [now] to LexisNexis,’ Pfeifer explains and adds that because of their capabilities they are going to be a massive part of the legal tech world.
He points out that in the past NLP machine learning systems needed a lot of infrastructure that you had to build. Now with LLMs, plus AWS or Azure, you ‘can focus on your use cases’.
‘The infrastructure [for LLMs] is lightyears ahead of what we had before,’ he says. ‘There is a rising tide that affects all products. This is a step change.’
He adds that Microsoft’s Copilot has been very welcome, because it has helped a lot of lawyers get used to generative AI and how to prompt, which in turn helps Lexis as well. And that kind of sector-wide buy-in, arriving so fast, just didn’t happen at scale with earlier legal AI tools.
Finally, where is this heading? Pfeifer concludes that the direction is clear: more use of LLMs, in legal especially.
‘Other industries are amazed at the pace of adoption of AI in legal!’ he states.
Moreover, as Pfeifer says, we really are at the beginning still. There is a lot more to come and Lexis aims to be right at the centre of it.