Litig Unveils AI Transparency Charter to Promote Responsible AI Adoption in the Legal Sector
Today, 22nd October 2025, the Legal IT Innovators Group (Litig) announced the launch of its AI Transparency Charter, together with a supporting Litig AI Product Transparency Statement, a Litig AI Use Case Framework and other materials.
As law firms and legal technology providers increasingly explore generative AI, it is critical that they do so in a safe, ethical, and transparent way that promotes trust. The Charter serves as a best-practice benchmark for anyone providing generative AI-based tools and products to the legal sector.
The Charter sets out clear commitments for organisations, to help ensure that products and services that use generative AI for legal use cases are developed in a safe, ethical, and transparent way. Crucially, the Charter does not require signatories to disclose commercially or legally sensitive information. Instead, it sets out key commitments and, through the Litig AI Product Transparency Statement template, provides a structured way to explain how a tool or product that includes generative AI is developed, tested and maintained, and the use cases for which it is intended.
The Charter forms part of the Litig AI Benchmark Initiative, a landmark effort that started in July 2024 with a broad community across the legal industry and is designed to foster trust, accountability, and responsible use and adoption of AI. Developed and supported by the initiative’s working group, the Charter and supporting documents already have broad approval across the legal industry.
The Charter’s core commitments include:
- Transparency – clear, open communication on how AI is used in relevant legal services and products.
- Accuracy & Testing – evidence-backed claims on performance, supported by testing data and methods.
- Bias & Ethics – proactive measures to identify, address, and mitigate risks such as bias.
- Use Cases & Limitations – honest disclosure of where AI works well, and where it should not be relied upon.
- Environmental Impact – commitments to track and reduce the carbon and resource footprint of AI.
- Regulation & Standards – alignment with industry standards and compliance with the EU AI Act and other frameworks.
The Charter is accompanied by supporting documents and information:
- Litig AI Product Transparency Statement – a standardised template, inspired by Google’s AI “model cards,” that enables providers of legal AI tools to set out details of their technology, use cases, data, testing methods, and ethical safeguards at the product or service level.
- Litig AI Use Case Frameworks – practical templates for law firms, suppliers and other organisations to define, document, understand, evaluate and discuss AI use cases, ranging from high-level scenarios to detailed business case-style descriptions.
- A glossary of AI terminology, including terms for testing and benchmarking in the legal industry.
- Information about other benchmarks, evaluations, due-diligence questions and AI regulation.
Together, these resources create a comprehensive foundation for legal professionals to evaluate, adopt, and govern AI responsibly.
A call to action for the legal industry
Litig is inviting AI vendors to sign up to the Transparency Charter via the Litig website. The goal is to establish an industry-wide benchmark for AI trust and accountability, enabling firms to embrace innovation while safeguarding ethical and professional standards.
Quote from Litig leadership
“To drive sustainable and responsible adoption of AI, the legal industry must have confidence and trust in the tools they are using. The Litig AI Transparency Charter and supporting tools provide practical, workable building blocks that firms and AI providers can use to build trust and confidence and ensure that AI is used responsibly, without compromising standards expected by law firms, clients and society.” David Wood, Litig Director.
And from iManage:
“At iManage, we believe AI Confidence isn’t something you buy – it’s something you build. The LITIG Transparency Charter embodies that principle by establishing practical standards for developing AI that is transparent, tested, and trusted. We’re proud to have served on the committee that shaped this initiative and to collaborate with peers across the industry in driving responsible, confident AI adoption,” said Jenny Hotchin, Legal Practice Lead, iManage.
Further details, along with the full AI Transparency Charter, AI Product Transparency Statement, and Use Case Frameworks, are available at: https://www.litig.org/ai/introduction
At the London Law Expo earlier this month, John Craske, speaking in his capacity as Director of Litig, joined Shawn Curran, CEO of Jylo, for an engaging conversation and Q&A session that provided an exclusive first look at the AI Transparency Charter.
If you missed it, you can listen to the full conversation here: https://bit.ly/3Weryo1
