Generative Legal AI + ‘The Last Human Mile’

There has been a surge of interest in what generative AI can do. But what does this technology really mean for the legal sector? To find out, we must navigate a path between ‘Death of the Lawyer 2.0’ hysteria on one side and dismissals of the whole thing as a gimmick on the other. Artificial Lawyer looks at what this tech can really do.

What It Can Do

Generative AI (gen AI), working via Large Language Models such as OpenAI’s GPT-3, can do some amazing things. It can create a passable dialogue between your two favourite movie characters, write fairly average song lyrics and poems, or knock out a ‘by the numbers’ press release. It can summarise a play by Shakespeare in a rather dry way. It can even answer straightforward factual questions, including about legislation and regulation. And it can add to text that already exists, or create whole documents, e.g. an NDA. All of this is based on having absorbed and ‘catalogued’, to put it simply, a huge quantity of human text output, so that when given a command or question it can go back to this ‘catalogue’ and source, as best it can, what seems to be the right collection of text in response.

Putting aside the privacy issues of feeding legal documents and questions into a third-party system, let’s look at what this can do for the legal world. It’s worth saying that Artificial Lawyer has already had an in-depth look at one gen AI tool, Spellbook by Rally (see here), and has seen early-stage examples of what Harvey can do, a similar tool that has received $5 million in funding led by the OpenAI Startup Fund.

We’ll also consider the pros and cons of the abilities below. In effect, using them is like bumping into an incredibly clever stranger on a train who regales you with how smart they are and how amazing their photographic memory is, but who, after a while, you realise doesn’t really have any depth to their performance, even if their propensity for mimicking others and producing seemingly accurate copies of people, collections of facts, even sections of documents, is at first amazing.

In short, what this tech can do is incredible, but it’s also ‘missing a bit’, it’s ‘not all there’, or perhaps most accurately of all: there’s no-one home. And that’s because there is no-one there, no person, no thought, no intuition, no mind, no character, no soul – just a giant reference library run by a set of algorithms whose job is to find things that appear to match and then type them out for you to enjoy, whether those things are factual or iterations of earlier creative output.

Sometimes it is also outright dangerous in its inaccuracies, especially when it gets very close to the truth in some areas but just misses in others. Because it gives the impression of knowing everything, you get lulled into accepting everything it offers up, even when it’s wrong.

It’s a bit like when someone is telling you a story which at first sounds reasonable, then as they go on you realise little errors are creeping in that give it all away. Then you wonder which parts are right and which parts are totally wrong. As the provider of the information cannot reassure you, there is no choice but to go elsewhere for a more trusted output.

Here, then, are some of the things gen AI can do:

Q&A – This is perhaps the simplest use case: matching a question to what seems to be the most appropriate answer. What is special here is the way OpenAI’s various tools, for example, handle complex, multi-clause questions, e.g. Can a landlord in Scotland terminate your lease if you cannot pay, and if you are signed off on medical leave?

The problem here is whether the answer it gives is correct. OpenAI’s tools are by their nature generalist, trained on massive collections of text found on the web. Who knows if the answer you get is right? In reality, what will most likely happen is that big brands, such as Thomson Reuters and LexisNexis, and perhaps some new challenger brands, will take the tech and adapt it to information that they can 100% vouch for – and then offer the same kind of legal research opportunities they do now, but with this generative AI approach.

They already did this when the first wave of NLP-driven legal AI companies looked like they might upset the legal research apple cart in the mid-2010s, e.g. LexisNexis bought Ravel and Lex Machina, while Thomson Reuters built its NLP capabilities internally. They responded eventually, once they saw they faced serious competition from companies using new technology.
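To make the Q&A mechanics concrete, here is a minimal sketch of what such a call looks like in code, using the OpenAI Python library as it stood in the GPT-3 era. The model name and prompt wording are illustrative assumptions, and note that nothing in the call checks the answer against any actual statute:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

question = (
    "Can a landlord in Scotland terminate your lease if you cannot pay, "
    "and if you are signed off on medical leave?"
)

# GPT-3-era completion call: the model generates text that statistically
# 'fits' the question from its training 'catalogue'. It does not look
# anything up, and it cannot tell you whether its answer is right.
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3 model name
    prompt=f"Answer the following legal question:\n\n{question}\n\nAnswer:",
    max_tokens=300,
    temperature=0.0,  # low temperature for a more repeatable answer
)

print(response.choices[0].text.strip())  # may be confidently wrong
```

Note that nothing in that output comes with a source attached: the answer reads just as authoritatively whether it is right or wrong, which is exactly the danger described above.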

Summary – The summary ability, in this case for a contract, can be seen with Spellbook (see above). Artificial Lawyer has seen this at work and can vouch for the fact that it does work. However, the question is: how well does it work? It can also summarise the text as if for a 12-year-old, for example, which is very handy for those focused on clear English and readability.
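As a rough illustration of how simple the summarisation step is from the software side (the prompts here are assumptions for illustration, not Spellbook’s actual implementation):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

contract_text = """[paste the full contract text here]"""  # placeholder input

# Two prompt variants: a standard summary and the 'plain English' one.
# Neither offers any guarantee that every material clause makes the cut.
for style in (
    "Summarise this contract for a lawyer.",
    "Summarise this contract so a 12-year-old could understand it.",
):
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative GPT-3 model name
        prompt=f"{style}\n\n{contract_text}\n\nSummary:",
        max_tokens=400,
        temperature=0.2,
    )
    print(response.choices[0].text.strip())
    print("---")
```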

But, back to the key question: even if software can summarise, has it done it correctly? What if a key clause that really matters has been missed out of this boiled-down version of a contract? Well, then you have a summary that’s basically a risk – because you have told yourself you now know what’s in the contract when in fact you don’t.

Will this ability get better? Yes. Machine learning gets better. Will there always be a risk you get a partial summary and a key point is lost? Yes.

Which leads to the central conclusion about ‘the last human mile’. More on that in a moment.

Text Completion – Again, Spellbook does this very well. It reads the contract and provides new clauses that seem to match the flow of the document. E.g. a section on indemnities will be extended to include clauses on other aspects of indemnities that the system has in its ‘catalogue’ but which you have not already added to the contract.
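Mechanically, this is the most natural thing an LLM can do, since continuing a text is a completion model’s default behaviour. A crude sketch of the idea, again an illustration rather than Spellbook’s actual code:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

contract_so_far = """[paste the draft contract here, ending partway
through the indemnities section]"""  # placeholder input

# Plain continuation: no instruction is needed, because generating the
# 'next' text is what a completion model does. It will extend the draft
# with clauses that flow from what is already there, whether or not
# this particular client actually needs them.
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3 model name
    prompt=contract_so_far,
    max_tokens=400,
    temperature=0.5,
)

suggested_clauses = response.choices[0].text
print(suggested_clauses)  # a suggestion to review, not a clause to accept
```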

Will this be accurate? Correct? Needed? Well, this is again the last human mile issue. Just bunging in some new text that could be useful is not the same as adding a clause that you as a lawyer really know the client needs for this contract – and is totally fitting and legally solid.

‘Auto-complete’ can act as a reminder, a nudge, a prompt, a suggestion – but to just take it and accept it strikes this site as reckless to say the least. Contracts are there to help a client achieve a very specific business aim – just tacking on generic clauses is not something that will help, and could well be a risk.

So, users have to be certain how and why they are using the suggestions. They can be amazingly helpful, but you need a massive pinch of salt with them.

Whole Document Creation – Making a whole document, if one is in the catalogue, is also possible. The problem here is: if you are making contract X for client Y, then why not just use a template that fits and adapt it as needed? Asking a generic catalogue to auto-produce the whole thing for you is clearly reckless, because you have too little control over it.

One way forward would be to have a complex set of Q&As to answer first, which then allows the system to pick the right language, perhaps delving into your own templates and precedents to get the material for the contract. That could work and would reduce the risk. It’s a bit like a merger of gen AI with something like the contract atomisation ability of Law Machine, plus a more conventional doc auto template.
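Sketched in code, that pipeline might look something like the following. Everything here is hypothetical: the question set, the clause bank, and the function are illustrations of the idea, not any existing product:

```python
# Hypothetical sketch of a Q&A-driven drafting pipeline: structured answers
# select vetted in-house clauses, so gen AI is not inventing the substance.

QUESTIONS = ("governing_law", "confidentiality", "term")

# The firm's own vetted clause bank, keyed by the answers (illustrative).
CLAUSE_BANK = {
    ("Scotland", "mutual", "2 years"): [
        "This Agreement is governed by the law of Scotland...",
        "Each party shall keep the other's information confidential...",
        "This Agreement remains in force for two (2) years...",
    ],
}

def build_contract(answers: dict[str, str]) -> str:
    """Assemble a draft from vetted clauses selected by the Q&A answers."""
    key = tuple(answers[q] for q in QUESTIONS)
    clauses = CLAUSE_BANK.get(key)
    if clauses is None:
        # No vetted template fits: this is exactly where the
        # 'last human mile' kicks in and a lawyer takes over.
        raise ValueError("No vetted template matches - escalate to a lawyer.")
    # Gen AI could be used here purely to smooth the joins between
    # vetted clauses, rather than to draft substantive language itself.
    return "\n\n".join(clauses)

draft = build_contract({
    "governing_law": "Scotland",
    "confidentiality": "mutual",
    "term": "2 years",
})
print(draft)
```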

Either way, the key issue is having something which super-confidently chucks a new document at you and says: ‘That’s it, trust me.’

And again, lawyers will want to be sure (as we all should want to be).

Just because something looks right doesn’t mean it is right.

The Last Human Mile

The last mile problem is a well-known idea: many systems fail because some key steps at the end cannot be done properly, and that ruins the whole thing. We can extend this to ‘the last human mile’, i.e. you need a human in the loop when you get into areas of highly complex, unstructured data, where you need to trust that there is an actual expert there who is alive to all the needs, the risks, and the very real human intricacies of the situation.

The same is true with doc auto, for example. Just throwing a template at a legal issue may work, if the issue is very clear, is totally understood, and the template really does fit the need. But, as soon as you leave these ‘train tracks’ you are in trouble.

That’s the problem with real life. It’s complex, subjective, always changing, and always set within a framework that is itself evolving.

Then there is the ability to really analyse a situation and understand the bespoke nature of it. The law is not code, as this site has explored before. It’s blurry around the edges, and good lawyers with experience know how to help a client in these areas. They also know how to negotiate a good position, or know when to be intentionally vague. In short, making a contract for a complex need is equally complex work. A generic slice of text will not suffice. You need the human.

Conclusion

What this tech can do is incredible and it’s not just a gimmick. There are some really useful things it can deliver. But, equally, to believe this is ‘the end of lawyers’ all over again would be misguided – in fact, trusting some of the above capabilities without a human lawyer heavily involved would be dangerous. That risk can be moderated with additional development work, for sure, e.g. getting the gen AI to work off your own collection of templates and organised legal knowledge, plus more solutions will come and they will get better. But, for now, it’s sensible to not get carried away and certainly not believe that generic answers are going to provide legal confidence to anyone.
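For what it’s worth, here is one hedged sketch of what ‘working off your own collection’ could look like in practice: retrieve the firm’s own vetted material first, and let the model answer only from that. The retrieval step below is a naive keyword match purely for illustration; a real system would use proper search over vetted documents:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# The firm's own vetted knowledge base - illustrative content only.
KNOWLEDGE = {
    "scotland lease termination": "Vetted internal note on Scottish tenancy law...",
    "nda confidentiality": "The firm's approved standard NDA guidance...",
}

def answer_from_vetted_sources(question: str) -> str:
    # Naive keyword retrieval, purely for illustration.
    sources = [
        text for key, text in KNOWLEDGE.items()
        if any(word in question.lower() for word in key.split())
    ]
    if not sources:
        return "No vetted source found - refer to a lawyer."
    context = "\n\n".join(sources)
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative GPT-3 model name
        prompt=(
            "Answer ONLY from the vetted material below, and say which "
            f"passage you relied on.\n\n{context}\n\nQuestion: {question}\nAnswer:"
        ),
        max_tokens=300,
        temperature=0.0,
    )
    return response.choices[0].text.strip()

print(answer_from_vetted_sources("Can a landlord in Scotland terminate a lease?"))
```

Even then, the model can misread the vetted material, so the human check at the end does not go away.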

So, let’s celebrate what can be done here and encourage companies like Spellbook and Harvey to keep exploring what can be achieved, but also, let’s not – as always with AI – get carried away. The last human mile is going to be with us for a long time to come – and I’d argue, will always be there.

P.S. this article was not written by generative AI.

Richard Tromans, Founder, Artificial Lawyer, Dec 2022.

1 Comment

  1. On the Q&A function, I wonder whether it could support the answer with the source data a la academic papers – that at least would give you the opportunity to verify its conclusion.
