‘We Build AI Agents’ – Flank’s Jake Jones

‘Our ambition is to build an AI colleague – and [for it] to be trusted as one – that’s what keeps me going,’ explains Jake Jones, MD and co-founder of Flank, as he sets out the company’s agent strategy.

The company’s goal is to develop AI agents that ‘resolve requests from the business’, i.e. various parts of a company route their requests to the inhouse legal team, but those requests are headed off and handled, at least that’s the hope, by the agents Flank is developing.

This could cover things such as ‘generate documents, provide negotiation support, complete compliance forms and RFPs, and triaging incoming requests’ (see below).

Some agent examples by Flank.

In terms of the big picture, Jones and team see an environment where ‘Flank agents live in the channels you already use to unblock commercial teams autonomously’ and where these agents are ‘designed and trained to resolve requests and take actions exactly like a member of your team’. They will also be ‘steerable, deterministic, and always secure’.

Naturally this is very much LLM-based, and as Jones explains, without generative AI the company would be very different.

How We Got Here

You may not have heard of Flank, which until May this year was called Legal OS. The original company started back in 2017 and until recently worked mostly with law firms.

Jones says the problem was that law firms sometimes bought their earlier products but didn’t actually use them. So, in 2021 they added an inhouse focus, using document automation and rules-based processes to help those teams.

He notes that feedback from legal teams carried a consistent message: everyone was working at capacity, and there were a dozen day-to-day tasks soaking up their time. Someone wanted to get an NDA, someone else wanted to be sure they could sign a deal with another party, someone else needed to handle data security questions.

It all soaks up the inhouse team’s time. If only there were a better way…

‘So we started on this, pre-genAI, but it was too unpredictable. Then in late 2022 when genAI came along we saw what we could do. It was magical and it worked immediately,’ Jones remembers.

He also says he saw that now they could potentially achieve their main goal: to remove bottlenecks from within the enterprise.

Making Agents Work

One key aspect of Flank is that although it may sit with the inhouse team, it’s not really for their own use. ‘Flank is not for legal team efficiency, it’s for all the other teams,’ explains Jones.

But, does it help inhouse lawyers? For sure, as it takes away tons of work they would otherwise have to handle. That allows them to focus more on the things where they can add value.

One route an agent may take is as follows, Jones states: ‘The agent sits in the email. A business user emails and says I need an NDA. The agent gathers the info it needs and then deals with it – all via email.’
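
As a rough illustration, and not Flank’s actual code, the sketch below shows how that kind of email-resident flow could be structured: the agent pulls what details it can from each message, asks for whatever is still missing, and only generates the document once everything is in hand. The required fields, the extraction logic and the send_email stub are all hypothetical stand-ins.

```python
# Hypothetical sketch of an email-resident NDA agent; not Flank's code.
from dataclasses import dataclass, field

REQUIRED_FIELDS = ("counterparty", "governing_law", "term_months")

@dataclass
class NdaRequest:
    requester: str
    answers: dict = field(default_factory=dict)

def extract_fields(email_body: str) -> dict:
    """Stand-in for an LLM extraction step: pull 'key: value' lines from the email."""
    found = {}
    for line in email_body.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            key = key.strip().lower().replace(" ", "_")
            if key in REQUIRED_FIELDS:
                found[key] = value.strip()
    return found

def send_email(to: str, body: str) -> None:
    print(f"To: {to}\n{body}\n")  # stand-in for a real mail integration

def handle_email(request: NdaRequest, email_body: str) -> None:
    """One conversational turn: extract what we can, then ask a follow-up or deliver."""
    request.answers.update(extract_fields(email_body))
    missing = [f for f in REQUIRED_FIELDS if f not in request.answers]
    if missing:
        send_email(request.requester,
                   "To prepare your NDA I still need: " + ", ".join(missing))
    else:
        send_email(request.requester,
                   f"Here is your NDA draft for {request.answers['counterparty']}.")

# Example exchange, all over email
req = NdaRequest(requester="sales@example.com")
handle_email(req, "Hi, I need an NDA.\ncounterparty: Acme GmbH")
handle_email(req, "governing_law: Germany\nterm_months: 24")
```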

OK, so that’s the way it flows, but what of the LLMs here?

‘Every task has a fine-tuned LLM selected to review the output and determine if it’s good or not,’ Jones sets out. ‘We have rules that define what good looks like.’

They use Gemini, Claude 3.5 Sonnet, and GPT-4o, but Jones observes that some LLMs degrade, which can mean that over time they give very different answers. That drift, plus the overriding need for accuracy, has led Flank to do things such as seek a specific response four times in parallel and then choose the best one as part of the evaluation step.
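
A minimal sketch of that pattern, assuming a generic call_llm client rather than any specific vendor SDK, might look like this: the same prompt is sampled several times in parallel, and a separate temperature-zero evaluator call scores each candidate against plain-language rules before the best one is kept. This illustrates the general best-of-N idea rather than Flank’s implementation.

```python
# Best-of-N sampling with an LLM evaluator; illustrative only.
# call_llm is a hypothetical stand-in for any chat-completion client
# (Gemini, Claude 3.5 Sonnet, GPT-4o, ...).
import concurrent.futures

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder: wire up a real model client here."""
    raise NotImplementedError

def score(candidate: str, rules: str) -> float:
    """Ask an evaluator call to grade a candidate against rules that define 'good'."""
    verdict = call_llm(
        "Rules for a good answer:\n" + rules +
        "\n\nCandidate answer:\n" + candidate +
        "\n\nScore the candidate from 0 to 10 and return only the number.",
        temperature=0.0,
    )
    try:
        return float(verdict.strip())
    except ValueError:
        return 0.0  # unparseable verdicts count as failures

def best_of_n(prompt: str, rules: str, n: int = 4) -> str:
    """Sample n candidates in parallel and keep the highest-scoring one."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda _: call_llm(prompt), range(n)))
    return max(candidates, key=lambda c: score(c, rules))
```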

Jones also notes that the more complex a task, the worse an LLM is at doing it. Hence it’s better to ask for something simple and specific, evaluate it, make sure it’s right, and then move to the next task.
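
One way to picture that advice is as a pipeline of narrow tasks, each gated by an evaluation check before the next one runs. The threshold, retry count and step structure below are illustrative assumptions, not details Jones shared.

```python
# Illustrative 'small, specific steps' pipeline with an evaluation gate per step.
def run_pipeline(steps, evaluate, max_retries: int = 2) -> dict:
    """steps: list of (name, task_fn) pairs, where task_fn(context) -> output.
    evaluate(name, output) -> score between 0 and 1."""
    context = {}
    for name, task_fn in steps:
        for _ in range(max_retries + 1):
            output = task_fn(context)
            if evaluate(name, output) >= 0.8:  # 'what good looks like' as a threshold
                context[name] = output         # only verified output feeds the next step
                break
        else:
            raise RuntimeError(f"Step '{name}' failed evaluation after retries")
    return context
```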

Artificial Lawyer then suggests that perhaps we can say that LLMs have better taste than culinary skills. I.e. genAI is better at discerning what ‘tastes’ right, as opposed to what it is actually able to cook.

And this is one of the things about genAI that is so different to previous technologies: it’s full of human-like idiosyncrasies, and even experts in the domain find it hard to explain why these models are sometimes so erratic. In turn, engineers have to find new ways around these quirks. But, as Jones has found, there are ways to solve these challenges.

The Future

What about tapping legal LLMs? I.e. an LLM that has been specifically trained on legal data – of which there are several now, albeit far smaller than the super-general models of OpenAI and others.

‘It’s too early and it’s so expensive to train an LLM to be good at a range of tasks. I don’t believe one single LLM that is like a lawyer is possible, as there are too many legal tasks and the tasks are very different.’

Jones adds that their future work will not be on the LLM side itself, but rather on ‘improving the application layer’, i.e. all the tooling that allows Flank to provide this agentic output.

Conclusion

It’s early days for agents in the legal field, but this site has spoken to plenty of people working in AI, both within legal and in other fields, and the idea of having something that can carry out a series of connected, LLM-driven tasks is a growing goal.

Are there challenges? For sure. As Jones notes, you need to check that the responses are correct. Next, how do you embed that knowledge into the agentic flow so that it’s not lost and can be reused on the same task again? There is much for the sector to learn.

But, if those issues can be handled then the use of agents will grow and grow, and Flank is doing some pioneering work.

(Advert)

Legal Innovators UK Conference – November 6 + 7, London.

If the subject of AI agents is of interest then come along to the Legal Innovators UK conference in London, Nov 6 and 7, where generative AI’s growing impact on the legal sector will be explored across multiple sessions, with law firms the focus of Day One and inhouse teams on Day Two.

For more information about speakers and companies taking part, please see here.

And to get your tickets now, please see here.