The OpenAI o1 release, which marks the beginning of a new wave of generative AI models with far greater reasoning capabilities, looks set to have a profound impact on legal AI generally and make agentic flows far more capable.
As OpenAI announced: ‘We’ve developed a new series of AI models (see here) designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models.’
And what do lawyers do all day? They read stuff and they write things, but that’s not what they’re really paid for. One of their core values is reasoning – and very complex reasoning at that. Hence, if genAI can reason far better, it will provide much more value and have much more impact in the legal domain.
Another aspect is improving agentic workflows, i.e. where the LLM is given a series of tasks, perhaps via a single prompt, and the ‘agent’ gets to work until the job is completed. Reasoning is fundamental here, or rather, better reasoning would greatly improve what such an agent can do. An agent that just ‘ploughs ahead’ is not much use if it encounters a problem it doesn’t ‘understand’; one that can pause, reason, and then act could be very valuable in the legal sector.
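For readers curious what that loop looks like in practice, here is a minimal sketch of such an agentic flow. All names are hypothetical, and `call_llm` is a stand-in for a real model API – the point is only the structure: work through tasks, and when one hits a problem, pause to reason rather than plough ahead.

```python
def call_llm(prompt):
    # Stand-in for a reasoning model call (hypothetical, not a real API).
    # It flags one clause it cannot classify so the loop has a problem to handle.
    if prompt.startswith("Reason step by step"):
        return "DONE: resolved after reasoning"
    if "unusual indemnity clause" in prompt:
        return "PROBLEM: clause type unclear"
    return "DONE: task completed"

def run_agent(tasks, max_reasoning_steps=3):
    """Work through tasks; on a problem, pause and 'reason' before acting."""
    log = []
    for task in tasks:
        result = call_llm(task)
        steps = 0
        # Instead of ploughing ahead, re-prompt with the problem as context.
        while result.startswith("PROBLEM") and steps < max_reasoning_steps:
            result = call_llm(f"Reason step by step about: {task}")
            steps += 1
        log.append((task, result))
    return log

log = run_agent(["summarise contract", "review unusual indemnity clause"])
```

In this toy run, the first task completes directly, while the second triggers the pause-and-reason branch before finishing – the behaviour the new class of reasoning models is meant to make reliable.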
Legal tech companies are already impressed with what OpenAI o1 can do.
Scott Stevenson, CEO of Spellbook, told this site: ‘It is ridiculously good. One of the things it is going to be really good at is document revision (e.g. many edits across a document based on instructions). It was very hard to do that well using GPT-4.’
Meanwhile, Harvey stated that: ‘This new model offers substantial improvements on Harvey’s leading indicators of model performance. We’re thrilled to start building the next generation of legal agents and workflows with this new series of models.’
Now, about those agents. As Harvey points out in their blog on OpenAI o1, reasoning along clear objective lines is great, but… the law is contextual and subjective. How do you leverage this reasoning capability – perhaps within an agentic flow – if you have to engage with this subjectivity?
Harvey believes they have a solution (see here).
They explain: ‘In particular, OpenAI o1’s reasoning capabilities are optimized for largely deterministic problems, like those presented in coding and math.
‘Law and professional services, in contrast, generally present problems deeply rooted in context – where most reasoning requires considering multiple sides of a problem and making judgment calls that draw on subjective context.
‘A more generalist model can struggle to plan and reason in these contexts, thinking linearly and myopically about solutions which can lead it even further astray than prior, less thoughtful models.
‘To solve this problem, Harvey’s legal and ML research teams [will] work closely, both in-house and with partners at OpenAI to identify ways to align models for these domain-specific reasoning problems. From identifying and curating relevant datasets, to generating novel forms of human data, these teams work interdisciplinarily to ensure models think about and solve problems the same way lawyers do.’
In short, they will refine the way the new range of o1 models reason so they handle the contextual subtleties experienced in legal work.
And, if you can do that, then you can build even more effective agents that don’t get stumped when they encounter a problem, but can then reason around it and continue. And that would be a major boost to efficiency in the legal domain.
–
So, there you go. It’s not the ‘God LLM’ some dream of, one that solves everything all at once, but it looks like a solid step forward, especially in the areas legal tech companies are most excited to explore, i.e. the agentic aspect, or in more prosaic language: the ability to do a bunch of things one after the other without you having to tell it what to do. In short, it’s like having a really, really good associate working for you. Or, at least that’s the plan.
Plus, just consider the speed of change here.
ChatGPT arrived in November 2022. That is less than two years ago.
We are now at the point of rapidly improving models with – it is claimed – much better reasoning.
In another two years, where will we be?
Are law firms preparing realistically for this? Are the clients getting to grips with this as well?
Consider this: a junior associate joining a firm this autumn may spend, let’s say, eight-plus years before they reach the junior partner ranks – if they are lucky and that’s the path they want. Now, consider where genAI and legal tech will be in 2032.
Yep. Makes you wonder what we will be able to achieve. Exciting times to be writing about legal AI.