By Electra Japonas, CPO, SimpleDocs.
There is a pattern emerging in how legal teams are experiencing AI tools, and it is worth naming directly.
The output is fluent. The speed is real. And yet something is consistently off. Positions that feel commercially tone-deaf. Suggested language that no reasonable counterparty would accept. Redlines that miss the point of what the organisation actually cares about.
The instinct is to blame the model. To assume the technology is not yet good enough, or that legal work is simply too nuanced for AI to support in any meaningful way. Neither explanation is quite right.
The problem is not intelligence. It is context.
What AI Actually Needs
When a lawyer reviews a contract, they are not working from a blank sheet. They are drawing on accumulated knowledge: what their organisation has agreed to before, what positions they are prepared to defend, where the market typically lands on a given clause, and what this specific deal is trying to achieve.
None of that knowledge is explicit. But it is present in every decision a skilled lawyer makes. It shapes what they flag, what they let go, and how they frame a negotiation position.
AI models are trained on broad corpora of legal language and can apply that training with considerable accuracy to general tasks. What they cannot do, without the right inputs, is replicate the embedded organisational and market knowledge that makes legal decisions contextually appropriate.
The result is output that is legally coherent but contextually unreliable. Technically defensible but commercially wrong for the deal in front of you.
This is not a flaw in the underlying technology. It is a gap in how the technology is being deployed.
The Layers of Context That Matter
Context in legal work operates at three distinct levels, and a reliable AI system needs access to all of them.
The first is organisational context: what has this team agreed to historically, what standards have they adopted, and where are their non-negotiables?
The second is transactional context: what kind of deal is this, what is the risk profile, and what is the commercial objective? The same clause warrants a different response in a strategic partnership than in a low-value supplier agreement.
The third, and most underappreciated, is market context: what do counterparties actually accept across comparable transactions? Where is resistance likely to be high and where does it dissolve quickly?
Without the third layer, the first two are incomplete. Internal standards exist in relation to a market. Positions are meaningful only when measured against what the other side is likely to accept.
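For teams building toward this, the three layers can be treated as structured inputs to the model rather than ad-hoc prompt text. The sketch below is purely illustrative (all class and field names are hypothetical, not any particular product's schema); the point it encodes is the one above: a review context missing the market layer is incomplete.

```python
from dataclasses import dataclass

@dataclass
class OrganisationalContext:
    # What the team has agreed to historically, and its non-negotiables
    standard_positions: dict[str, str]
    non_negotiables: list[str]

@dataclass
class TransactionalContext:
    # What kind of deal this is and what it is trying to achieve
    deal_type: str
    risk_profile: str   # e.g. "high", "medium", "low"
    objective: str

@dataclass
class MarketContext:
    # Clause-level benchmarks: how often counterparties accept a position
    acceptance_rates: dict[str, float]

@dataclass
class ReviewContext:
    organisational: OrganisationalContext
    transactional: TransactionalContext
    market: MarketContext

    def is_complete(self) -> bool:
        # Internal standards are only meaningful against a market
        # reference point, so an empty market layer means the first
        # two layers cannot be calibrated.
        return bool(self.market.acceptance_rates)
```

A system assembled this way can refuse to present output as calibrated when the market layer is empty, rather than silently falling back on organisational history alone.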
The ‘Just Upload Some Contracts’ Problem
The prevailing advice in legal AI is something like: give the AI context just in time. Upload a few past contracts. Let the model compare and calibrate. For a solo practitioner or a small team working on a contained set of deal types, this is not unreasonable.
But for in-house legal teams at enterprise organisations, it is not enough. And treating it as sufficient is one of the more dangerous assumptions currently circulating in the market.
Enterprise legal teams do not operate as a single coherent unit with a shared, up-to-date view of what they have agreed to and why. They are distributed across business units, geographies, and deal types. Standards are applied inconsistently. Exceptions accumulate without being acknowledged as precedent. The contracts uploaded for ‘context’ reflect a partial and often unrepresentative slice of what the organisation has actually done.
Just-in-time context in this environment does not ground the AI in organisational reality. It grounds it in whatever the individual lawyer happened to surface that day. That is not context. It is anecdote at scale.

The Enterprise Problem Is a System Problem
In-house legal at a large organisation is a system. Decisions made by one lawyer in one business unit interact with decisions made by others across the organisation. Concessions accepted under commercial pressure in one deal quietly reshape what the team treats as normal across dozens of subsequent ones.
An AI system that draws on curated, organisation-wide precedent produces output that can be trusted, audited, and used as a basis for delegation. An AI system that draws on whatever the individual lawyer uploads before starting a review produces output that is only as reliable as that individual’s curation choices — and which cannot be verified or improved systematically over time.
One of those is a tool. The other is infrastructure.
Why Market Data Matters More at Enterprise Scale
AI trained on a team’s own contract history, without access to credible external market data, will absorb the positions that team has historically taken. It will reproduce aggressiveness where the team has been aggressive, and normalise concessions where they have drifted. Without an external reference point, it cannot distinguish a well-calibrated standard from an outlier.
For a solo practitioner, this risk is manageable. The feedback loop is short.
In an enterprise legal team, the same failure mode operates at a completely different scale. Misaligned AI output is not corrected by a single experienced lawyer. It is propagated across dozens of reviewers, hundreds of deals, and multiple business units — often without anyone noticing until the damage is visible in aggregated outcomes.
Credible, clause-level benchmark data breaks this loop. It introduces an external constraint independent of any single organisation’s history, giving both the AI system and the legal team a defensible basis for the positions they are taking.
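The check that benchmark data enables can be made concrete. The function below is a simplified sketch of the idea, not any vendor's actual logic: compare the rate at which a team has historically taken a clause position against the rate at which comparable counterparties accept it, and flag positions where the two diverge sharply. The function name, inputs, and threshold are all illustrative assumptions.

```python
def flag_outliers(internal_positions: dict[str, float],
                  market_acceptance: dict[str, float],
                  threshold: float = 0.25) -> dict[str, float]:
    """Flag clause positions where a team's practice diverges from the market.

    internal_positions: clause -> share of past deals where the team
        took its standard position on that clause
    market_acceptance: clause -> share of comparable transactions where
        counterparties accepted that position (benchmark data)
    Returns clauses whose divergence exceeds `threshold`, mapped to the
    signed drift. (Illustrative logic only.)
    """
    flags = {}
    for clause, internal_rate in internal_positions.items():
        market_rate = market_acceptance.get(clause)
        if market_rate is None:
            continue  # no benchmark available for this clause
        drift = internal_rate - market_rate
        if abs(drift) > threshold:
            flags[clause] = drift
    return flags
```

Without the second input, the loop has nothing to compare against, which is the drift problem described above: a model trained only on the team's own history will treat whatever the team has done as the baseline.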
Context Is Infrastructure, Not a Feature
The dominant framing in legal AI treats context as something to be added on demand. Upload what you need. Prompt carefully. Adjust.
This works for individuals. It does not work for organisations.
For in-house legal teams operating at enterprise scale, context is the infrastructure on which reliable, consistent, defensible AI output depends. That means capturing organisational precedent systematically across the full contract population. It means making transactional parameters available consistently across all reviewers. And it means drawing on market benchmark data that is current, credible, and verifiable.
The enterprise legal teams that build this infrastructure will find that AI starts to produce output they can genuinely rely on.
Those that settle for just-in-time context will get faster individual work — and an aggregated outcome that continues to feel unpredictable.
The gap between those two experiences is not about the AI. It is about what you give it to work with.
—
You can find more information about SimpleDocs here.
—

About the Author:
Electra Japonas is the Chief Product Officer at SimpleDocs where she leads the product vision behind the company’s AI-powered contract tools. She works at the intersection of legal expertise and product design, helping shape how AI can be applied to real-world legal workflows like contract review, clause drafting, and playbook automation.
As the Founder of oneNDA, the global standard for NDAs, Electra brings a deep focus on standardization and usability to her work, pushing for tools that are not just smart, but genuinely adoptable.
Before joining SimpleDocs in 2024 following its acquisition of oneNDA, she was CEO and Founder of legal operations firm TLB, and held senior legal roles at the European Space Agency, Disney, BAT, and EY. She is a dual-qualified Solicitor (England & Wales) and Cyprus Attorney.
—
[ This is a sponsored thought leadership article by SimpleDocs for Artificial Lawyer. ]