
Harvey has announced that ‘next-generation agents’ are now available on its platform, which it defines as ‘systems that can plan, adapt, and meaningfully interact with humans to complete a task’. The agents will span transactional, litigation, and financial services work.
The company is also developing ways to evaluate the agents’ performance against human lawyers across a range of tasks (see below), and added that this shift is being driven by the growth of LLM-based reasoning models.
The US-based genAI company, which recently received $300m in fresh funding, explained that for it ‘AI agents’ – a rapidly evolving area where definitions remain somewhat blurry – comprise three key parts:
- ‘Plan: the ability to break down complex tasks into a set of steps required to solve them.
- Adapt: the ability to use results of actions to change a plan or make steps within that plan more effective.
- Interact: the ability to solicit and incorporate input from humans, model systems, or other agents during the execution of a task.’
They added that this new functionality will operate via Harvey’s ‘workflows’, which consist of ‘one or more agents that collectively produce a meaningful, specific work product. They define an overall goal, from reviewing a filing, to drafting, and use a set of interfaces to enable human and agent interactions to reach that goal’. (See image below – note the ‘N-times’ aspect, i.e. the agent will keep performing the set tasks.)
And for clarity they stated that: ‘When multiple models are combined with task-specific tools and knowledge sources, such as search tools, RAG databases, or other function calls, we describe that as a model system.’
Or more succinctly, they noted: ‘Agents are the means; workflows are the ends.’
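Putting that together: on this description a workflow is an outer loop that runs one or more agents toward a defined goal, repeating (the ‘N-times’ aspect) until the goal is met. A hypothetical sketch – the function names and the trivial ‘drafting agent’ are illustrative, not Harvey’s API:

```python
def workflow(goal_met, agents, task, max_rounds=5):
    """Run agents repeatedly ('N-times') until goal_met says the work product is done."""
    draft = task
    for round_no in range(1, max_rounds + 1):
        for agent in agents:
            draft = agent(draft)  # each agent contributes to the work product
        if goal_met(draft):
            return draft, round_no  # the workflow ends when the goal is reached
    return draft, max_rounds


# Example: a toy 'drafting agent' that appends a revision marker each pass.
revise = lambda d: d + " [revised]"
result, rounds = workflow(lambda d: d.count("[revised]") >= 3, [revise], "draft NDA")
# Three passes are needed before the goal check succeeds.
```

This is the sense of ‘agents are the means; workflows are the ends’: the agents are interchangeable components, while the workflow owns the goal and decides when to stop.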

These agentic workflows have several core aspects, which Harvey states include the following:
- ‘Personalized: Specific workflows are recommended to users based on their area of expertise, creating tailored suggestions that make task execution more efficient.
- Guided: Rather than requiring users to craft detailed queries, Harvey agents actively guide them through each step of a task. By proactively asking for necessary context, workflows remove the burden of user prompting from the equation.
- Transparent: Assistant workflows introduce “thinking states,” providing users with visibility into how decisions are made and task execution is progressing. This level of transparency allows for greater understanding of Harvey’s reasoning and methodology under the hood.
- Expert Quality: Each workflow is optimized for specific use cases, leveraging domain-specific AI models and bespoke citation requirements to ensure even greater accuracy. By using the best-suited models and following a structured set of steps for each task, workflows generate professional-class work product out of the box.’
Human vs Agent Evaluation
Moreover, to ensure accuracy and performance, the company is ‘introducing custom evaluations to compare workflow outcomes directly against human lawyers’. In other words, Harvey will check whether these automated programs perform legal tasks as well as a human lawyer would.
‘These evaluations aim to establish whether workflows produce human-quality work on common tasks typically requiring an hour or more of lawyer time,’ they explained.
The initial benchmarks cover three task types: (1) structured drafting, (2) unstructured drafting and analysis, and (3) data extraction and structuring.
‘On each of these, Harvey workflows perform at or above human lawyer-level and save substantial time on common, high-value tasks,’ they said.
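Harvey has not described its evaluation method in detail, but the basic shape of such a benchmark is straightforward: score workflow output and human output on the same tasks with the same rubric, then compare the means. A hypothetical harness, with the scoring function left abstract:

```python
def evaluate(workflow_outputs, human_outputs, score):
    """Compare mean rubric scores for workflow vs human work on the same task set.

    Hypothetical harness: `score` stands in for whatever grading rubric
    (human review, model-graded, etc.) the real evaluation uses.
    """
    w_mean = sum(score(o) for o in workflow_outputs) / len(workflow_outputs)
    h_mean = sum(score(o) for o in human_outputs) / len(human_outputs)
    return {"workflow": w_mean, "human": h_mean, "at_or_above": w_mean >= h_mean}
```

The hard part in practice is the rubric, not the harness: ‘human-quality work’ on an hour-plus drafting task is far harder to score reliably than a data-extraction task with a checkable answer.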

So, is this a big deal?
As AL has noted before, the main benefit of agents is not so much that they tap genAI – you can already do that via a series of prompts you input directly – but that these programs can run ‘chained tasks’, even with some human-in-the-loop checkpoints to ensure they operate correctly.
A chain of tasks that results in something you want – e.g. a drafted document – based on a series of instructions and/or iterations naturally saves time.
And time-saving is at the centre of all of this, which in turn comes back to the point that efficiency gains and time-based billing for less complex work are like matter and anti-matter, i.e. they are in direct conflict. Of course, for inhouse lawyers using agents there should be no such barrier. Meanwhile, law firms that can segment out the heavily automated tasks and place a fixed fee on them can also be in a happy place.
More broadly, as this site has covered, AI agents – i.e. chained tasks with some reasoning often as part of this process – are rapidly rolling out across the legal tech market, from giant providers, to super-specialised startups focused on just one area, such as PE fund formation, or patent due diligence.
Will they catch on? Well, the key point has been made above: if you have a piece of tech that gets you to ‘X’ faster, but your internal business model is diametrically opposed to such efficiency, then the benefits are limited – unless these agents are only applied to non-billable work.
So, as is often the case, it’s not just whether the tech works well, but whether there is a business model that can support the true deployment at scale of such tech. One might say that the biggest barrier to agents taking off is now ‘time’.