Thomson Reuters, via its Labs division, has created a Trust in AI Alliance that includes senior engineering and product leaders from Anthropic, AWS, Google Cloud and OpenAI, the global company said. The Alliance will consider the needs of lawyers, but also cover other professional use cases.
The shared mission of the group is ‘to advance the development of trustworthy, agentic AI systems’, they explained.
Sounds good, but what will they actually do?
TR said that ‘participants will share insights, identify common challenges, and help shape shared approaches to building reliable, accountable AI systems’.
There will also be ‘a focus on engineering trust directly into AI architectures’, they added.
Joel Hron, Chief Technology Officer at Thomson Reuters, commented: ‘As AI systems become more agentic, building trust in how agents reason, act, and deliver outcomes is essential.
‘The Trust in AI Alliance brings together the builders at the forefront of this work to align on principles and technical pathways that ensure AI serves people and institutions responsibly, and at pace.’
Is this a big deal? Well, it all depends on whether the message from the legal and wider market about accuracy and confidence in AI and agentic results is passed up by the Alliance's members to the top bosses of these LLM makers. That said, companies such as OpenAI are already pushing hard to improve the accuracy of their results, but a regular reminder of that need from the legal world can only help.
Moreover, the use of agents could be a godsend, or the opposite… if they compound AI errors. Hence, perhaps, TR's particular need to focus on this aspect right now, as agents become part of legal life.
More broadly, it’s a publicly visible move by a company with a major legal research division, showing that it’s taking trust in AI and agentic outputs very seriously – which makes sense given the constant headlines about lawyers inserting made-up citations into their work. And, as noted, agentic systems that ‘get the wrong end of the stick’ and then multiply those errors could be much worse than a basic, one-off hallucination.
Overall, a good and useful move.
For more info, check out this blog post by TR.