
Litig, the UK-based legal tech group, has launched an AI Transparency Charter; those who sign up to its principles are given a kitemark badge to show their adherence to the 'quality code'. The move follows earlier debacles over the accuracy of AI tools in the legal world (see the interview below with CMS's John Craske).
The Charter comes with a supporting Litig AI Product Transparency Statement, a Litig AI Use Case Framework, and other materials to help legal tech companies – and law firms that sell the output of their own legal AI tools – show the world that they are taking AI adoption, and issues around accuracy, seriously.
The project followed a number of tests last year that suggested a sizeable gap between what legal AI tools could do and what buyers assumed they could do. Artificial Lawyer, and others, called for some type of community-led project that could instil faith in what was then still a nascent part of the legal tech market. Now, of course, most legal tech tools contain some AI element – so this really affects a huge number of sellers and buyers.
Another aspect that Artificial Lawyer backed was that we should not aim to set specific benchmarks for AI tools, as the 'state of the art' is moving so fast. Rather, there should be a 'compass' approach, where the goal is ever-improving results, accompanied by a shared awareness and transparency around accuracy and AI performance on legal tasks. (See AL article from 2024, here.)
AL engaged with the group on this, primarily in the early stages, and it's great to see the final outcome now, further down the road.
So, what does it all say? The Charter’s core commitments include:
- Transparency – clear, open communication on how AI is used in relevant legal services and products.
- Accuracy & Testing – evidence-backed claims on performance, supported by testing data and methods.
- Bias & Ethics – proactive measures to identify, address, and mitigate risks such as bias.
- Use Cases & Limitations – honest disclosure of where AI works well, and where it should not be relied upon.
- Environmental Impact – commitments to track and reduce the carbon and resource footprint of AI.
- Regulation & Standards – alignment with industry standards and compliance with the EU AI Act and other frameworks.
The Charter is accompanied by supporting documents and information:
- Litig AI Product Transparency Statement – a standardised template, inspired by Google’s AI “model cards,” that enables providers of legal AI tools to set out details of their technology, use cases, data, testing methods, and ethical safeguards at the product or service level.
- Litig AI Use Case Frameworks – practical templates for law firms, suppliers, and other organisations to define, document, understand, evaluate, and discuss AI use cases, ranging from high-level scenarios to detailed business case-style descriptions.
- A Glossary of terminology around AI, including testing and benchmarking in the legal industry.
- Information about other Benchmarks, Evaluations, Due Diligence questions and AI Regulation.
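To make the model-card idea concrete: a Transparency Statement of this kind is essentially a structured product disclosure. The sketch below is illustrative only – the field names are my assumptions, not Litig's actual template – but it shows how a provider might model such a statement as a simple structured record:

```python
from dataclasses import dataclass, field

# Hypothetical model-card-style transparency statement.
# Field names are illustrative assumptions, NOT Litig's actual template.
@dataclass
class AIProductTransparencyStatement:
    product_name: str
    provider: str
    intended_use_cases: list[str]       # where the tool works well
    known_limitations: list[str]        # where it should not be relied upon
    underlying_models: list[str]        # e.g. base LLMs the product relies on
    testing_methodology: str            # how accuracy claims were evaluated
    data_handling: str                  # training / retrieval data provenance
    ethical_safeguards: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """One-line disclosure suitable for a product page."""
        return (f"{self.product_name} by {self.provider}: "
                f"{len(self.intended_use_cases)} declared use case(s), "
                f"{len(self.known_limitations)} declared limitation(s)")

# Example with invented values for a fictional product:
statement = AIProductTransparencyStatement(
    product_name="ExampleDraftAssist",
    provider="Example Legal Tech Ltd",
    intended_use_cases=["first-draft clause generation"],
    known_limitations=["not suitable for unsupervised court filings"],
    underlying_models=["a third-party base LLM"],
    testing_methodology="blind review of sample outputs by qualified lawyers",
    data_handling="no client data used for model training",
)
print(statement.summary())
```

The point of a standardised structure like this is comparability: if every vendor fills in the same fields, buyers can line up products side by side rather than parsing bespoke marketing claims.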
Together, Litig says, these resources create a comprehensive foundation for legal professionals to evaluate, adopt, and govern AI responsibly.
Litig is inviting AI vendors to sign up to the Transparency Charter [here: https://www.litig.org/ai/introduction]. The goal is to establish an industry-wide benchmark for AI trust and accountability, enabling firms to embrace innovation while safeguarding ethical and professional standards.
–
AL asked CMS innovation head, John Craske, who was instrumental in getting this going, some more questions.
– Will those who sign up get a ‘kitemark’ badge?
Yes – we have a process when people sign up to welcome them and send them the kitemark image and the link it should go to. We will encourage people to use it, but ultimately that’s up to them. We will also list the organisations who sign up on the Litig page.

– Will this adherence to the charter be enforced / monitored?
It’s something that the Working Group have considered and will keep under review. At this stage, we don’t have the resource to monitor or audit compliance, so the Charter is a voluntary code of conduct. We will of course respond if we are made aware of anything that goes against the Charter. We are hoping for widespread take-up which would make this a good problem to have!
– On the law firm side, will they also have to, or should they, sign up to this, given that they are selling AI-driven services to clients? (And in some cases sending them work produced with tools they have built internally.)
Yes, the view is that it applies to anyone providing legal AI products, and that can, and does, include law firms if they are offering legal AI products to their clients.
–
So, there you go. The goal here is not to set specific AI accuracy levels, but rather to say: ‘We are all using these tools, let’s have some shared approaches to how we judge their value and use.’
Naturally, the end goal is ever-increasing usefulness and accuracy from AI tools. Part of that is out of our hands, i.e. it’s down to the likes of OpenAI and Anthropic to improve the base models that legal AI lives upon. There’s not much we can do there except send letters to Sam Altman asking for better results.
But, more broadly, the legal world having a shared, community-based understanding of what AI accuracy is, and having vendors who are committed to transparency, will certainly help us all to work better with what is available at present.
Congrats to John and everyone else on the team who drove this forward.
—
Legal Innovators Conferences in London and New York – November ’25
If you’d like to stay ahead of the legal AI curve then come along to Legal Innovators New York, Nov 19 + 20 and also, Legal Innovators UK – Nov 4 + 5 + 6, where the brightest minds will be sharing their insights on where we are now and where we are heading.
Legal Innovators UK arrives first, with: Law Firm Day on Nov 4th, then Inhouse Day, on the 5th, and then our new Litigation Day on the 6th.


Both events, as always, are organised by the Cosmonauts team!
Please get in contact with them if you’d like to take part.