Context is More Important than Compute for Legal AI

By Ed Walters, VP of Legal Innovation + Strategy, Clio.

Law firms and legal tech companies spent much of 2025 racing to adopt the newest AI foundation models from OpenAI, Google, Anthropic, and others, trying to squeeze out slightly better reasoning or writing capability. That is understandable: everyone wants better capabilities than their competitors. But while AI companies will keep racing to the newest models in 2026, legal tech companies will branch off into a different race: the race for complete context.

As foundation models get more capable at more complex tasks, the real differentiator for legal AI will be context. That might be the context of grounding in authoritative, current, global law. Or it might be the context provided by expert legal architects. It could be background information about the client, or the specific matter at hand. Or it could be hand-picked template libraries, checklists, and playbooks from a document management archive. But total context probably requires some combination of all of these.

The second major differentiator between legal AI and foundation models in 2026 will be the centrality of trust for legal AI. It will not be enough for legal AI to be grounded in law. Law firms and their clients will demand transparency and an audit trail for legal work, to build trust in the veracity of the underlying materials. After all, if we can’t see, verify, and trust the underlying legal grounding, how can we trust the analysis of AI tools?

Human verification will remain an essential requirement of legal AI. That means AI systems must be designed with transparency at the forefront, in what we sometimes call an 'architecture of trust.'

This is where legal begins to diverge from the broader AI market. While general-purpose models optimize for fluency and broad knowledge, legal work demands something different: transparency, grounding, auditability, and context. It requires an architecture of trust, not an architecture of skills.
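To make that idea concrete at the data level, here is a minimal sketch in Python. Every type and field name below is hypothetical, invented for illustration, and nothing here is Clio's actual implementation. The point is structural: in an architecture of trust, an answer carries the sources it rests on, so a human reviewer can verify each one before relying on the analysis.

```python
from dataclasses import dataclass, field

@dataclass
class SourceCitation:
    """A verifiable pointer back to an authoritative legal source."""
    citation: str          # e.g. "410 U.S. 113"
    court: str
    quoted_text: str       # the exact passage the answer relies on
    verified_by_human: bool = False

@dataclass
class GroundedAnswer:
    """An AI answer that is only as trustworthy as its grounding."""
    text: str
    sources: list[SourceCitation] = field(default_factory=list)

    def audit_trail(self) -> list[str]:
        """List every source and whether a human has verified it."""
        return [
            f"{s.citation} ({s.court}): "
            f"{'verified' if s.verified_by_human else 'UNVERIFIED'}"
            for s in self.sources
        ]

    def is_fully_verified(self) -> bool:
        # An answer with no sources at all is ungrounded, not verified.
        return bool(self.sources) and all(
            s.verified_by_human for s in self.sources
        )
```

The design choice worth noticing is the last line: an answer with zero sources fails verification by definition, which is exactly the failure mode of ungrounded foundation-model output.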

The Hidden Problem with General-Purpose Data

General-purpose AI models are trained on massive datasets scraped from the internet. That approach works well for many applications. If you need to summarize an article, generate marketing copy, or answer general knowledge questions, these models perform impressively.

But legal work is not general knowledge work. A case citation is not just text to be parsed. It sits within a hierarchy of authority. Its meaning depends on jurisdiction, how courts have treated it over time, and how it interacts with statutes and other precedent. Strip away that information infrastructure to treat legal materials as simple probabilistic text, and you lose the very thing that makes legal reasoning coherent.

That is why so many firms report disappointing trials of AI foundation models. A partner tests a research tool, gets an answer that looks polished, and then discovers the citations do not exist, or a foundation model (very convincingly!) misstates the law. An associate tries a general-purpose model and ends up spending more time verifying the output than if they had done the work manually. The problem is not that ungrounded AI lacks capability. The problem is that it lacks the structured legal context required to reason accurately within the law.

Context as the Foundation for Trust

At ClioCon 2025, Jack Newton described a new model for legal AI, what he called ‘context engineering.’ He explained: ‘Context engineering is about giving AI the same complete picture you’re carrying around in your head, so it doesn’t need to interpret text in an island, but it grasps meaning and relationships and intent, because it has all the context necessary to make the right conclusions.’

Clio CEO Jack Newton speaking about context engineering at ClioCon.

Jack’s concept of context engineering marks an important departure from simply adding compute to improve AI. Context in legal work is not an enhancement. It is foundational infrastructure for legal tasks. A contract clause means something different depending on the jurisdiction, the parties involved, the deal structure, and dozens of other factors that exist outside the text itself. A motion has weight because of the procedural posture, the judge’s tendencies, and the specific facts of the case. Legal reasoning depends on these relationships, and AI that cannot access or understand them will always produce unreliable results.
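As an illustration only (the class and field names below are hypothetical, not Clio's API), context engineering amounts to gathering what the lawyer already carries in their head, such as jurisdiction, parties, posture, and key documents, into one structured payload that travels with every question to the model:

```python
from dataclasses import dataclass, field

@dataclass
class MatterContext:
    """The picture a lawyer carries in their head about one matter."""
    jurisdiction: str
    matter_type: str                     # e.g. "asset purchase"
    parties: dict[str, str]              # role -> name
    procedural_posture: str = ""
    key_documents: list[str] = field(default_factory=list)   # titles or DMS IDs
    playbook_rules: list[str] = field(default_factory=list)  # firm checklists

def build_prompt(question: str, ctx: MatterContext) -> str:
    """Prepend structured matter context so the model never reads
    the question on an island, stripped of its relationships."""
    lines = [
        f"Jurisdiction: {ctx.jurisdiction}",
        f"Matter type: {ctx.matter_type}",
        "Parties: " + "; ".join(
            f"{role}: {name}" for role, name in ctx.parties.items()
        ),
    ]
    if ctx.procedural_posture:
        lines.append(f"Procedural posture: {ctx.procedural_posture}")
    if ctx.key_documents:
        lines.append("Relevant documents: " + ", ".join(ctx.key_documents))
    if ctx.playbook_rules:
        lines.append("Firm playbook: " + " | ".join(ctx.playbook_rules))
    return "\n".join(lines) + f"\n\nQuestion: {question}"
```

In a real system this payload would be assembled automatically from the practice management platform rather than typed by hand, but the shape of the problem is the same: the question alone is never enough.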

This is where the distinction between legal data and general-purpose data becomes decisive. Legal data is structured to preserve these relationships. It connects cases to their treatment history, statutes to their amendments, and rules to their jurisdictional boundaries. General-purpose data, by contrast, treats law as just another corpus of text to be indexed and retrieved.

The difference shows up in the outputs. AI built on structured legal data can trace its reasoning back to authoritative sources. It can distinguish between binding precedent and persuasive authority, between text in the majority opinion and similar-looking text in a dissent. It can flag when a case has been overturned or when a statute has been amended. AI trained on generic internet data cannot reliably do any of these things, no matter how sophisticated the underlying model might be.
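Those checks depend on the kind of structure sketched below, again with entirely hypothetical names: a small citation graph that records how later authority treats earlier cases, so a system can flag overruled precedent and distinguish binding from persuasive authority.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    citation: str
    jurisdiction: str
    binding_in: set[str] = field(default_factory=set)  # forums where precedential

@dataclass
class Treatment:
    """How one case treats another: the edge that general-purpose
    text corpora throw away."""
    citing: str   # citation of the later case
    cited: str    # citation of the earlier case
    signal: str   # "followed", "distinguished", "overruled", ...

class CitationGraph:
    def __init__(self) -> None:
        self.cases: dict[str, Case] = {}
        self.treatments: list[Treatment] = []

    def add_case(self, case: Case) -> None:
        self.cases[case.citation] = case

    def add_treatment(self, t: Treatment) -> None:
        self.treatments.append(t)

    def is_good_law(self, citation: str) -> bool:
        """False if any later case overruled this one."""
        return not any(
            t.cited == citation and t.signal == "overruled"
            for t in self.treatments
        )

    def authority_weight(self, citation: str, forum: str) -> str:
        case = self.cases[citation]
        if not self.is_good_law(citation):
            return "overruled"
        return "binding" if forum in case.binding_in else "persuasive"
```

Nothing in this toy graph is intelligent. The point is the opposite: once the relationships are preserved as data, distinguishing binding from persuasive authority becomes a lookup rather than a probabilistic guess.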

Context, Mental Health, and Productivity

Lawyers are extraordinary professionals: creative, analytical, and diligent. But legal work requires them to maintain context across thousands of documents, in dozens of systems, across multiple matters at once: file folders and DMS archives, draft documents, emails, voicemails, texts, calendars, interviews, depositions, billing systems, diligence lists, and war rooms.

Holding all of that context in a lawyer’s head is an easy way to make mistakes. It raises the cognitive load of busy professionals working under tight deadlines, high stress, and high stakes. Research shows that after switching between matters and systems, professionals need more than nine minutes to regain focus. Another study from ALM Intelligence and Law.com Compass found that workers spend four hours per week just transferring this context between different software tools. Collectively, that can add up to more than five weeks per year lost to context switching.

The good news is that comprehensive AI platforms can consolidate much of that work into a single architecture that maintains context for lawyers. In 2025, Clio worked with Neuro-Insight, a leading neuroanalytics researcher, to conduct a neurological study of legal professionals. The result: lawyers using a single software platform to perform these tasks reported a 25 percent decrease in their overall cognitive load.

Context engineering can ease chronic burnout among lawyers, produce better, richer legal work, and reduce the billable hours lost to context switching. It’s no wonder that context is such an important frontier for legal AI.

Building on the Right Foundation

This is not a technical argument. It is a practical one. Firms want tools they can rely on, and reliability in legal AI depends on the quality and structure of the underlying data. Models trained on broad internet datasets may write fluently, but they struggle with the precision and traceability that matter most in legal work.

The legal profession has always been conservative about adopting new technology, and for good reason. The cost of error is high, and trust is hard to rebuild once lost. As AI becomes more central to legal practice, the firms that succeed will be those that insist on systems built for legal work, not systems adapted to it.

But legal AI tools that are grounded in law, built for easy verification, and drawing on rich context will transform legal work for the people who use them. Context engineering with comprehensive legal AI platforms will both reduce burnout and cut the billable time lost to switching back and forth among 39 open browser tabs.

This is an architecture of trust, and it is where legal AI departs from foundation models. Foundation model companies will undoubtedly deliver amazing breakthroughs in 2026 as they add vastly more data centers and computing resources to improve their models’ skills. When those models get better, they will make legal AI tools more capable at the same time. But instead of investing in compute, legal AI in 2026 will see major investments in context.

To learn more about what Clio can do for you, please see here.

About the author: Ed Walters is the VP of Legal Innovation + Strategy at Clio and one of the original legal tech entrepreneurs. He was the co-founder of Fastcase, a legal publishing company that now serves as the data foundation for Clio Work and Vincent AI.

Ed worked to create a new generation of legal AI tools in his prior role as Chief Strategy Officer at vLex, forging a key partnership with OpenAI and building Vincent AI. He holds nine patents for legal tech innovations.

[ This is a sponsored thought leadership article by Clio for Artificial Lawyer. ]
