In this second part of the discussion on generative AI and the legal sector, law professor and legal AI expert, Dan Katz, talks to Artificial Lawyer about some of the deeper aspects of what the current tech shift means for lawyers. We cover how LLMs can supercharge ALSPs, how to stop generative AI hallucinations and data leakage, the end of ‘traditional’ doc automation, and more.
The first video can be found – here. To watch this Part 2 video, please press play.
Some of the issues we explore:
- Privacy and data control – how do you do this? Much depends on risk appetite. You need a moderation layer and an audit log; you can also apply differential privacy techniques – redacting aspects of imported data to strip out identifiers. But you may still need people to check this.
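The redaction-plus-audit-log idea can be sketched in a few lines. This is only an illustration – the `redact` helper and its regex patterns are hypothetical, and a production moderation layer would use a proper NER model rather than regexes:

```python
import re

# Hypothetical identifier patterns -- a real moderation layer would use a
# trained NER model, but regexes are enough to show the shape of the idea.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text):
    """Strip identifiers before text leaves the firm, keeping an
    audit log of what was removed so a person can check it."""
    audit_log = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            audit_log.append(f"{label}: {match}")
        text = pattern.sub(f"[{label}]", text)
    return text, audit_log

clean, log = redact("Contact jane.doe@acme.com on +44 20 7946 0958.")
# clean: "Contact [EMAIL] on [PHONE]."
```

The audit log is the part that matters for risk appetite: it lets a human verify exactly what was stripped before anything is sent to an external model.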
- ChatGPT’s release led to us all taking part in the largest data annotation exercise in human history.
- We will start to see people build many more LLMs, as the unit economics improve every day. GPU costs are dropping, and once you have made a model you can compress it, which reduces the cost again.
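As a rough illustration of why compression cuts costs: post-training quantisation stores each weight as a small integer plus a shared scale factor instead of a 32-bit float. The weights below are made up and this is a toy sketch, not how any particular model is actually compressed:

```python
# Toy sketch of post-training quantisation, one way a model gets compressed:
# store each weight as an 8-bit integer plus one shared scale factor,
# instead of a 32-bit float -- roughly a 4x storage saving per weight.
weights = [0.91, -0.42, 0.07, -1.30, 0.55]       # made-up float weights

scale = max(abs(w) for w in weights) / 127       # map the range onto [-127, 127]
quantised = [round(w / scale) for w in weights]  # small ints, cheap to store
restored = [q * scale for q in quantised]        # close approximations

# Rounding error is at most half a quantisation step.
max_error = max(abs(w - r) for w, r in zip(weights, restored))
```

The trade-off is a small loss of precision per weight in exchange for a model that is much cheaper to store and serve.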
- Things get cheaper. So many people are focused on optimising LLMs that costs keep falling. New chips are coming too. (RT – reminds me of Moore’s Law.)
- We will also see more of what Bloomberg did – they bought their own GPUs and built their own proprietary LLM. Bard will get better. There is DeepMind, Amazon, Anthropic, and many others. And there will also be domain-specific models.
- Is this the MySpace era all over again? I.e. it’s so early in this new period of tech change it’s very hard to know where things will go. Dan says: ‘A lot of what we have now (re. LLMs) will be gone in two years.’
- OpenAI got a jump on the market, but this is not over.
- M&A (as noted previously) will grow because the companies that entered the market first educated it – but then other waves will follow. We saw this with legal tech. Buyers normalise things and get more sophisticated, then new waves of products arrive. This may drive legal tech M&A.
- Would you agree that ChatGPT’s hallucinations made a big mess and that now everyone in the generative AI field is having to clean that up so people trust this technology more?
- Dan says that GPT-4 is much better, but people need to stop thinking that ‘zero shot’ approaches will be sufficient. Need a sequential approach to using generative AI. (RT – reminds me of good old-fashioned user training of NLP.)
- Can we get to zero hallucinations? Being wrong but with the absolute confidence you are right is very risky. Dan says: ‘There is no system with a zero-error rate. But you can reduce it, yes.’
- And that has already been done. You can also add a retrieval log to make sure.
- The ultimate answer will be the growth of an LLM ecosystem of tools to help validate the outputs. There will be plugins that can help, e.g. you can connect to Wolfram for maths. (RT – i.e. the solution will be building layers and feedback correction loops into the process, perhaps via other apps and APIs.)
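A minimal sketch of such a feedback-correction loop, assuming a hypothetical `ask_llm` call and a made-up trusted source set – a draft answer is only accepted if every source it cites can be found in the trusted set, otherwise the failure is fed back as a retry prompt:

```python
# Minimal feedback-correction loop: a draft answer is accepted only if every
# source it cites actually exists in a trusted source set; otherwise the
# failure is fed back and the model is asked again. All names here are
# hypothetical stand-ins, not a real API.

TRUSTED_SOURCES = {
    "s1": "The limitation period for simple contract claims is six years.",
}

def ask_llm(question, feedback=""):
    # Stand-in for a real model call; a real one would use question + feedback.
    return {"answer": "Six years.", "citations": ["s1"]}

def validated_answer(question, max_retries=3):
    feedback = ""
    for _ in range(max_retries):
        draft = ask_llm(question, feedback)
        missing = [c for c in draft["citations"] if c not in TRUSTED_SOURCES]
        if not missing:
            return draft                      # every citation checks out
        feedback = f"Citations not found: {missing}. Please correct them."
    return None                               # escalate to a human reviewer

result = validated_answer("What is the limitation period for contract claims?")
```

The point is the loop, not the toy checker: real versions plug retrieval systems, calculators such as Wolfram, or other validators into the checking step.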
- Also, Dan notes: ‘The law does not have a zero error rate either.’
- So, what will change most? KM (knowledge management) will be a key area for change. Due diligence, less so, but there are still opportunities to go deeper / further using this tech in that area.
- Is doc automation dead? It’s hard to see how doc automation can survive in the traditional way – the challenge seems existential. LLMs don’t replace the entire tool, but if there are two competitors, customers will go to the one with LLM capability.
- As to companies adapting: first, many legal tech companies ignored LLMs; then they said they could do it themselves (e.g. with relatively quick and easy integrations with OpenAI); and next they will realise they have to do something much more significant than that.
- What happens to ALSPs? Does generative AI transform them or kill them? Can the ALSP model come of age by utilising LLMs? (Re. contract review / complex-work-at-scale approaches – naturally, lawyers-on-demand won’t be affected in the same way by LLMs.)
- Dan says: ‘ALSPs have a big pile of work that is amenable to these tools. They are focused on a steady stream of similar work and you get economies of scale.
- ‘ALSPs used to be body shops, and similar to law firms, but have some automation. Can they make automation a priority? Private Equity can help here. Help with moving one model into another. ALSPs are in a nice position to get the most out of AI tools.’
- And we also discuss Dan’s new LLM-connected legal data project, that he’s doing with Michael Bommarito and others.
- Conclusion: it’s another AI summer for legal – (and RT – this time it’s going to last….!)
There is a lot here and much to explore around these topics. If you found this of interest, want to know more about this field and also share your own insights into it, then come along to the Legal Innovators California Conference in San Francisco on June 7 / 8. More information and tickets here.
We will be talking about plenty of other issues, from where legal ops is now, to the evolution of ALSPs, to the growing role of the legal innovation team inside law firms. Generative AI will no doubt find its way into many of the panels and keynotes across both days as it’s all part of an increasingly vibrant and expanding legal innovation ecosystem.
Look forward to seeing you all in San Francisco, where we can explore these topics together, hear from many experts in the field, and celebrate our growing sector!
Richard Tromans, Founder of Artificial Lawyer and Conference Chair.