‘You Will Be Legally Liable For AI Outcomes’ – UK Government

A new policy paper by the UK Government has set out key aims for AI regulation that could have far-reaching consequences if made into law. Many of the model policies cover what you would expect, but nestled among them is one particular demand that could both generate work for lawyers and cause them headaches, namely that ‘accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural’.

In this regard it’s a bit like having a Chief Data Protection Officer, although the ‘Chief AI Officer’ would be in a far trickier position: they won’t always know what is going on with the AI tools the firm uses, at least not unless they get into the details of every single client use case, and in a major firm there could be quite a few of those running at any one time.

Now, you might hope that commercial law firms using NLP tools will find a way to escape this, but the policy wording is very open to interpretation. If a lawyer uses an NLP tool to review documents and then says a deal is fine to proceed, that law firm will presumably need someone to sign off on that – or at least have in place some kind of blanket ‘tick box’ agreement with the client, confirming they are happy to proceed and naming someone in the law firm, or the firm as a whole, as liable for any cock-ups the AI software may cause.

I.e. it splits out the challenge. At present, lawyers have professional indemnity insurance that covers the whole of their working activity; they don’t have to take out special insurance for the outcomes produced by AI systems. But presumably, if explicit liability for ‘AI outcomes’ became law, something would have to change there.

On a more positive note, this – if made into law – would create a surge of new work for lawyers as they travelled around the UK helping companies to stay compliant if they used any kind of ‘AI’ tool.

Of course, Google is in effect ‘an AI tool’: it is a machine learning system that ‘improves’ as it ‘learns’ from the data fed into it. So how does everyone deal with that, given that every business in the country probably uses Google in some way that might shape outcomes for their customers?

But, if we are pragmatic, the new policy is most likely aimed at companies that provide a distinct service built on machine learning, where it’s felt the business may not be wholly on top of how it works or what impacts it may have. For example, a company filtering job candidates with AI software that turns out to be badly biased against certain people because it relies on historical employment data that has not been checked for balance; in other words, it simply reinforces old hiring patterns.

But what about using predictive AI tools to help decide whether to take on a client’s lawsuit? Will the law firm have to designate someone who is legally responsible for using that piece of tech, even if human lawyers make the final decision and provide significant input – which of course they would, as the tech can only ever give a partial answer? And does the same go for transactional review? And eDisclosure tools?

Ultimately it means that as liability rises, law firms and their clients will increasingly have to trust the tools they use – even more so than today. That then opens up challenges and opportunities for the software companies, in legal tech and other sectors.

Any road, it’s still early days. Here’s what Rt Hon Nadine Dorries MP, Secretary of State for Digital, Culture, Media and Sport, said: ‘Across the world, AI is unlocking enormous opportunities, and the UK is at the forefront of these developments. As Secretary of State for the Department of Digital, Culture, Media and Sport, I want the UK to be the best place in the world to found and grow an AI business and to strengthen the UK’s position so we translate AI’s tremendous potential into growth and societal benefits across the UK.

‘Our regulatory approach will be a key tool in reaching this ambition. A regulatory framework that is proportionate, light-touch and forward-looking is essential to keep pace with the speed of developments in these technologies. Such an approach will drive innovation by offering businesses the clarity and confidence they need to grow while making sure we boost public trust.

‘Getting this right is necessary for a thriving AI ecosystem and will be a source of international competitive advantage. We will continue to advocate internationally for our vision for a pro-innovation approach to AI regulation recognising that both the opportunities and challenges presented by AI are fundamentally global in nature.

‘I am therefore pleased to publish this paper which sets out our emerging thinking on our approach to regulating AI. We welcome views on our proposals from across business, civil society, academia and beyond ahead of publishing a White Paper later in the year.’

And here are the potential policies related to the point above about liability for AI outcomes:

Define legal persons’ responsibility for AI governance

AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way which has not been explicitly programmed or even foreseen – which can raise secondary issues and externalities. This is ultimately what makes them intelligent systems.

Therefore, accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural.

Clarify routes to redress or contestability

AI systems can be used in ways which may result in a material impact on people’s lives, or in situations where people would normally expect the reasoning behind an outcome to be set out clearly in a way that they can understand and contest – for example, when their existing rights have been affected. Using AI can increase speed, capacity and access to services, as well as improve the quality of outcomes. However it can also introduce risks, for example that the relevant training data reproduces biases or other quality concerns into an outcome.

Subject to considerations of context and proportionality, the use of AI should not remove an affected individual or group’s ability to contest an outcome. We would therefore expect regulators to implement proportionate measures to ensure the contestability of the outcome of the use of AI in relevant regulated situations.

You can find the full report here.