New Rules For AI-Related Disputes Raise Extra Questions

JAMS, the global alternative dispute resolution (ADR) group, has published new rules governing disputes involving AI – but they raise several additional questions. For starters, there’s the definition of AI: ‘A machine-based system capable of completing tasks that would otherwise require cognition.’

Then there’s the key question of how AI is used in the dispute resolution itself. For example, will two parties in an ADR setting be allowed to use an LLM to help develop an agreement between them? Plus, can AI systems be used as a form of reliable expert witness in certain areas?

But first, the main changes. The JAMS AI Disputes Clause and Rules now specifically refine and clarify procedures for cases involving AI systems, including ‘proper filing, service of the request for arbitration, commencement of the arbitration and service of documents throughout the arbitration’. That said, this occupies only a small part of the overall rules text, which remains fairly boilerplate by the look of it.

There are some juicy bits, though. For example, one section mentions: ‘The production and inspection of any AI systems or related materials, including, but not limited to, hardware, software, models and training data…’

Now, training data really is an explosive area for any AI company to be open about, especially at the moment, as the New York Times’ dispute with OpenAI shows. Moreover, how many companies will want to be totally transparent about their models’ inner workings, which they may see as a trade secret – imagine, say, a local government versus a self-driving car company?

There’s also a section on ‘expert(s) providing opinions on AI systems, including any experts inspecting any AI systems’, which raises some interesting questions as well.

How many people out there will be qualified to ‘inspect’ someone else’s GenAI system at the code level? And how will that company feel about an expert having a poke around, looking for clues and forming an opinion on whether its software, or its execution of an order or prompt, is transgressing the law in some way?

Kimberly Taylor, JAMS president, commented: ‘This pioneering initiative marks JAMS as the first in the ADR industry to establish a comprehensive legal framework tailored to the complexities of AI. We encourage all stakeholders to embrace these new rules, as they are essential in navigating the complicated landscape of AI disputes with clarity and foresight, to provide resolutions that are both effective and attuned to the latest technological developments.’

Beyond that, as noted above, there are plenty of other questions.

The definition of AI, once more, is: ‘A machine-based system capable of completing tasks that would otherwise require cognition.’ I.e. AI is that which does things without requiring cognition.

But, what is ‘cognition’? The Cambridge Dictionary states that this means: ‘The use of conscious mental processes.’ Clearly, an AI is not conscious. But then, neither is a Hoover or a jet plane, and they both complete tasks and are machines.

The key word then becomes ‘otherwise’, i.e. it suggests AI is doing tasks that would normally require human mental processes. So, is this just a different way of saying: a tool that does things as a human mind would, but isn’t one?

Moving on, for Artificial Lawyer the key thing is not so much how to define AI in a dispute, but rather how AI may help resolve disputes. That is not covered in JAMS’s new rules, yet.

AI is already present in many aspects of dispute resolution, from eDiscovery, to automatically building timelines, to summarising witness depositions. But, what about using AI, and especially an LLM, to actually help resolve the dispute itself?

What about both parties using an LLM-based system that has been system-prompted to guide them to a resolution through a series of steps (a rough sketch of what that could look like is below)? Would that be acceptable? Would that be enforceable?
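As a purely hypothetical illustration, here is a minimal sketch of such a system in Python, assuming the OpenAI client library; the model name, prompt wording and function are illustrative assumptions, not anything the JAMS rules describe or endorse:

```python
# Hypothetical sketch: an LLM 'mediator' steered by a system prompt.
# Assumes the OpenAI Python client; model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a neutral mediation assistant. Never take sides. "
    "Guide the parties through these steps, one at a time: "
    "1) restate each party's position; 2) identify shared interests; "
    "3) propose settlement options; 4) draft agreed wording. "
    "Flag any point that needs review by a qualified human mediator."
)

def next_step(transcript: list[dict]) -> str:
    """Return the assistant's next guidance, given the discussion so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *transcript],
    )
    return response.choices[0].message.content

# Example turn: each party's statement goes in as a user message.
print(next_step([
    {"role": "user", "content": "Party A: we want delivery penalties capped."},
    {"role": "user", "content": "Party B: we need cover for repeated delays."},
]))
```

The design choice doing the work here is the system prompt: it keeps the model in the role of neutral facilitator and pushes anything contentious back to a qualified human mediator – exactly the kind of detail a tribunal might want to examine.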

What about the parties providing evidence, or expert testimony, that is for the most part derived from an LLM’s analysis of a matter (if it were text-based)?

One could say we are already moving in that direction, using traditional technology-assisted review (TAR) and now LLM systems to review bulk document collections for eDiscovery, helping us decide which documents relate to a case or not, and hence steering the dispute. Clearly, there are a lot of humans in the loop here, but on a philosophical level we are asking ‘a machine’ to decide things.
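For illustration only, a first-pass relevance call of that kind might be sketched as below – again assuming the OpenAI client, with the model and prompt as hypothetical choices; real TAR workflows layer sampling, validation and human QC on top of anything like this:

```python
# Hypothetical sketch of an LLM first-pass relevance review for eDiscovery.
# Model, prompt and labels are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You are an eDiscovery first-pass reviewer. Reply with exactly one "
    "word: RELEVANT, NOT_RELEVANT or UNSURE. UNSURE documents must go "
    "to a human reviewer."
)

def classify(doc_text: str, case_issue: str) -> str:
    """One-word relevance call on a single document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed: a cheaper model suits bulk review
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user",
             "content": f"Issue: {case_issue}\n\nDocument:\n{doc_text}"},
        ],
    )
    return response.choices[0].message.content.strip()

# Humans stay in the loop: anything not clearly irrelevant is escalated.
label = classify("Email re: shipment delays in Q3...", "late delivery claims")
needs_human_review = label != "NOT_RELEVANT"
```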

But, one could take this a lot further, perhaps taking all the written evidence in a case and then driving the LLM to take a view – albeit that would mean a huge amount of information for an LLM to digest. Could that view be admitted to the hearing? Would it carry any weight? Could the other party then have ways to dispute that LLM-derived ‘expert opinion’? Naturally, this would mean a lot more than jamming a ton of docs through ChatGPT: you’d need plenty of additional structuring and a mass of complex system prompts to get anywhere close to a sensible system – but it doesn’t seem an impossible scenario in the coming years.
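As a thought experiment, that ‘additional structuring’ could begin as a simple two-stage map/reduce pass over the evidence bundle: summarise each document against the question in dispute, then synthesise the summaries into a clearly labelled provisional view. The function names and model below are assumptions for illustration, not a proven approach:

```python
# Hypothetical two-stage map/reduce over an evidence bundle, since no
# current model reliably digests a whole case file in one prompt.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    """Single chat completion under a given system prompt."""
    r = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return r.choices[0].message.content

def provisional_view(documents: list[str], question: str) -> str:
    # Map: summarise each document only as it bears on the question.
    summaries = [
        ask("Summarise only the facts bearing on the stated question. "
            "Do not speculate beyond what the document says.",
            f"Question: {question}\n\nDocument:\n{doc}")
        for doc in documents
    ]
    # Reduce: synthesise the summaries into a clearly labelled machine view.
    return ask(
        "From these summaries, set out the evidence for and against, then "
        "give a provisional view. Label the output as machine-generated "
        "analysis requiring verification by a qualified human expert.",
        f"Question: {question}\n\nSummaries:\n\n" + "\n\n".join(summaries),
    )
```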

Much more will come into view as the tech develops. Whatever happens, it’s good that JAMS is starting to grapple with questions related to AI – there are going to be a lot of them.

The new rules, which come into effect immediately, can be found here.
