
By Dylan Brown, Senior Manager, LexisNexis UK.
A new LexisNexis survey of more than 700 UK lawyers shows a legal market racing ahead with AI adoption but still struggling with integration. While 61% of lawyers now use generative AI at work, up from 46% in January 2025, only 17% report that AI is embedded in their firm’s strategy and operations. The gap is not about technology being unavailable. It is about how firms implement, train, and trust it.
Legal AI versus general AI: a question of trust
The survey highlights a critical split in adoption patterns. Just over half of lawyers (51%) use purpose-built legal AI tools such as Lexis+ AI, while the rest (49%) rely solely on general-purpose systems like ChatGPT or other large language models. Medium-sized firms are the most likely (70%) to choose specialist tools.
Why? The difference lies in context and provenance. As Shoosmiths partner Tony Randle put it: ‘A lawyer would never rely on Google to answer a legal question, so they should not rely on general AI platforms to take on tasks that require legal knowledge.’
Specialist platforms are trained on structured legal datasets, grounded in case law, legislation, and practical guidance. They also provide data lineage and audit trails, crucial for risk management. By contrast, general AI may accelerate drafting or admin but offers little in terms of compliance, transparency, or defensibility.
Integration, not experimentation, is the challenge
Four in ten lawyers describe their organisations as ‘experimenting but slow-moving’ when it comes to AI. For legal tech professionals, this highlights the real friction point: integration into existing systems and workflows.
Eversheds Sutherland’s Bhavisa Patel noted: ‘You can have the best solution, but if people don’t know what it is, how to use it, or how it will help them, the benefits will always be limited.’
This is not a hardware or software limitation. It is about enterprise adoption design:
- embedding AI outputs into DMS, PMS and KM systems
- designing role-based enablement and training
- setting clear governance policies on when AI can and cannot be used
- creating user trust through explainability and transparency
Without these layers, AI risks becoming a standalone experiment rather than a workflow accelerator.
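To make the governance layer concrete, here is a minimal sketch, in Python, of what a machine-readable ‘when AI can and cannot be used’ policy might look like. The roles, task categories and sign-off rules are entirely hypothetical, a thought experiment rather than any firm’s or vendor’s actual implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Task(Enum):
    # Hypothetical task categories; a real taxonomy would come from
    # the firm's own risk framework.
    SUMMARISE_PUBLIC_CASE_LAW = "summarise_public_case_law"
    DRAFT_CLIENT_ADVICE = "draft_client_advice"
    REVIEW_CONFIDENTIAL_DOCS = "review_confidential_docs"

@dataclass
class Policy:
    allowed: dict          # role -> set of tasks that may be routed to AI
    requires_signoff: set  # tasks needing human review before output is used

POLICY = Policy(
    allowed={
        "trainee": {Task.SUMMARISE_PUBLIC_CASE_LAW},
        "associate": {Task.SUMMARISE_PUBLIC_CASE_LAW, Task.DRAFT_CLIENT_ADVICE},
        "partner": set(Task),
    },
    requires_signoff={Task.DRAFT_CLIENT_ADVICE, Task.REVIEW_CONFIDENTIAL_DOCS},
)

def check(role: str, task: Task) -> tuple:
    """Return (permitted, needs_human_signoff) for a role/task pair."""
    permitted = task in POLICY.allowed.get(role, set())
    return permitted, task in POLICY.requires_signoff

print(check("associate", Task.DRAFT_CLIENT_ADVICE))     # (True, True)
print(check("trainee", Task.REVIEW_CONFIDENTIAL_DOCS))  # (False, True)
```

The value of encoding policy this way is that the same rules can gate requests in the DMS, the KM portal and any API integration, instead of living only in a policy document nobody consults mid-task.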
Measuring ROI: from efficiency to client value
The report shows that more than half of private practice lawyers using AI (56%) reinvest saved time into billable work, while 53% say it has improved work-life balance. But tracking efficiency gains is only part of the story.
Firms are beginning to shift metrics towards time saved, accuracy improvement, and client outcomes. In fact, 54% of private practice firms using AI now measure success based on time saved, up from 46% in January 2025. This reflects a growing recognition that ROI in legal AI is tied to workflow redesign, not just output speed.
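As a purely illustrative back-of-the-envelope sketch, the arithmetic behind a ‘time saved’ metric might look like this. Every figure is a hypothetical assumption rather than a survey finding, although the reinvestment share echoes the 56% reported above:

```python
# Hypothetical ROI model for AI-assisted fee earners; all inputs are
# illustrative assumptions, not survey data.
hours_saved_per_week = 5    # per fee earner, from AI-assisted drafting/review
reinvested_share = 0.56     # share of saved time put back into billable work
billable_rate_gbp = 300     # blended hourly rate
fee_earners = 40
weeks_per_year = 46

annual_recovered_value = (hours_saved_per_week * reinvested_share
                          * billable_rate_gbp * fee_earners * weeks_per_year)
print(f"Recovered billable value: £{annual_recovered_value:,.0f}")  # £1,545,600
```

Even a crude model like this makes the workflow-redesign point: the reinvestment share, not the raw hours saved, is the variable firms actually control.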
Clients are already adjusting expectations. As one law firm leader observed: ‘Given lawyers’ reliance on billing by the hour, clients will expect lower fees because AI delivers quicker results.’ This will accelerate the shift towards fixed fees, retainers, and bundled AI-and-lawyer services.
Security, provenance and hallucinations
Another key finding: 77% of lawyers remain concerned about AI producing inaccurate outputs. For technologists, this reinforces the need for domain-specific training data and strong provenance controls.
As Sarah Barnard, Director of AI Delivery at Linklaters, noted, some firms are waiting for clearer ROI before committing. But at the technical level, the question is about trust infrastructure:
- grounding AI outputs in authoritative sources
- building safeguards against hallucinations
- ensuring compliance with confidentiality obligations
- enabling secure integration with firm-owned knowledge bases
Legal AI platforms that can demonstrate not only accuracy but also auditability are likely to see faster adoption in risk-averse environments.
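A minimal sketch of that trust infrastructure, assuming the model has been prompted to tag each claim with the ID of a retrieved passage, might look like the following. The function, passage format and audit fields are hypothetical illustrations, not a description of how Lexis+ AI or any other platform actually works:

```python
import hashlib
import json
from datetime import datetime, timezone

def grounded_answer(question: str, passages: list, model_answer: str) -> dict:
    """Accept a model answer only if it cites at least one retrieved
    passage (tagged as [id] in the text), and emit an audit record.
    Each passage is a dict: {"id": ..., "citation": ..., "text": ...}."""
    cited = [p for p in passages if f"[{p['id']}]" in model_answer]
    audit = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question_sha256": hashlib.sha256(question.encode()).hexdigest(),
        "sources_cited": [p["citation"] for p in cited],
        "accepted": bool(cited),
    }
    print(json.dumps(audit))  # in practice: append to a tamper-evident log
    if not cited:
        return {"answer": None, "reason": "no authoritative source cited"}
    return {"answer": model_answer, "sources": [p["citation"] for p in cited]}

passages = [{"id": "p1",
             "citation": "Donoghue v Stevenson [1932] AC 562",
             "text": "(retrieved passage text)"}]
print(grounded_answer("Who owes a duty of care?", passages,
                      "A manufacturer owes a duty of care to the consumer [p1]."))
```

The shape of the control is the point: no authoritative citation, no answer, and an audit record is written either way.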
Talent and adoption: the hidden tech driver
The survey revealed that nearly one in five lawyers (18% in private practice, 19% in-house) would consider leaving their organisation if it failed to invest in AI. For technologists, this is not just an HR issue but a technology adoption issue.
If AI tools are clunky, poorly integrated, or unsupported, lawyers will disengage. If they are intuitive, secure, and embedded into daily workflows, lawyers will adopt them enthusiastically.
This is why UX design, data governance, and workflow integration are now as central to legal AI adoption as the underlying LLM itself. As Slaughter and May’s Michelle Holford put it: ‘If you can be open minded and have a willingness to experiment, it creates a culture where people are keen to try new technologies, not having them forced upon them.’
AI adoption depends on tech architecture as much as culture
The findings show that adoption is no longer the barrier; integration is. The culture clash is really a technology adoption clash: between firms that experiment without embedding and firms that re-engineer workflows around AI.
As Freshfields’ Gerrit Beckhaus concluded: ‘AI demands clear strategic direction and communication from the top, tying it to client-value outcomes and measurable impact.’
For vendors, consultants and innovation leaders, the challenge is to design AI that works inside legal infrastructure: secure, explainable, embedded, and measurable. For law firms, the priority is moving beyond pilots to systemic integration.
The future of legal AI will be defined not by how fast lawyers adopt tools, but by how effectively those tools are wired into the operational backbone of the firm.
Read the full report here.

Discover how the LexisNexis ecosystem is powering secure, trusted and integrated legal AI: The Power of the LexisNexis Ecosystem.
—
[ This is a sponsored thought leadership article by LexisNexis for Artificial Lawyer. ]