By Giuliana Rubinia, Senior Manager, LexisNexis UK.
As generative AI accelerates across legal practice, the cyber risks it introduces are evolving even faster. New research from LexisNexis reveals how underprepared many legal teams remain, and what’s at stake when cyber defences fall short.
A new LexisNexis survey of over 700 UK legal professionals investigates how cyber threats are impacting the legal sector. More than two in five (43%) ranked cyber security among their top three challenges for the next year, making it a persistent and urgent concern.
Law firm leaders are especially worried. Phishing and social engineering attacks top their list, with 44% of small firm leaders citing them as their biggest threat. AI-generated attacks, such as deepfakes and synthetic email scams, rank second and rising, flagged by 24% of respondents.
One small firm partner summed up the sentiment bluntly: ‘Our cybersecurity practices are not matching AI growth. Unless we catch-up, we shall decay and die.’
Despite growing awareness, many teams are lagging behind in cyber readiness, a particular worry in a sector where reputation, discretion and trust are everything.
Cyber breaches are now a reputational time bomb
For law firms, a breach isn’t just a tech failure. It’s a reputational crisis waiting to happen. Four in five lawyers said they fear client confidentiality being compromised by a cyber attack. Among General Counsel, that number reaches 100%.
When trust is breached, so is business. Firms face the risk of legal action, SRA investigations, and permanent reputational damage.
Yet many are responding proactively. Over half (52%) of lawyers now use multi-factor authentication (MFA) regularly, and in-house legal teams are leading the charge with 59% reporting consistent use of MFA.
But core security practices may no longer be enough. AI introduces new vulnerabilities, such as model manipulation and inadvertent data exposure, that legacy IT strategies aren’t equipped to handle.

Fears about safety and accuracy
The report found that 61% of lawyers now use AI for work purposes, and 32% plan to start soon. Yet more than three-quarters (77%) worry about inaccurate information generated by AI. A partner at a small firm voiced concern over ‘the accuracy of the data being in doubt’, especially in light of recent court mishaps involving fabricated citations.
Confidentiality is also a major issue, with 47% concerned about data leakage from AI platforms. Only 24% have received training on how to use AI safely, and fewer than 3 in 10 (28%) say their organisation has a clear, usable AI policy.
As Laura Hodgson, AI Lead at Linklaters, noted: ‘One of the ways we seek to reduce risk is through efforts on adoption such as training, communication and using vocal champions to explain the benefits and address any concerns.’
Third-party risk is the hidden cyber security gap
As firms adopt AI tools, many are relying on third-party providers, yet only 6% of lawyers listed external systems as a major cyber threat.
This could be a dangerous oversight. External platforms, especially cloud-based AI services, are a known vulnerability. Choosing secure, transparent vendors is no longer a matter of preference; it’s a business-critical priority.
LexisNexis advises firms to select technology partners that combine deep legal domain knowledge with robust security frameworks. As AI reshapes how legal teams work, a secure foundation is essential to preserving client confidence and avoiding compliance failures.
Investing in trust is no longer optional
As one senior IT manager from a small law firm put it: ‘We are dynamically and strategically adapting to the evolving AI landscape to safeguard the accessibility, confidentiality, and integrity of our client services.’
That means:
- Adopting multi-factor authentication across systems
- Running regular AI-specific risk assessments
- Training staff on AI usage and risk management
- Choosing vendors with a proven security track record.
Ultimately, securing trust in the AI age means embedding compliance and resilience into every part of your tech stack and working only with partners who do the same.
The LexisNexis report is a wake-up call: AI has changed the threat landscape. The firms that succeed will be those who act decisively, build secure partnerships, and train their people to use AI responsibly.
Read the full report here.

—
[ This is a sponsored thought leadership article for Artificial Lawyer by LexisNexis. ]