By Minesh Tanna, Simmons & Simmons, and Jacob Turner, Fountain Court Chambers.
AI adoption is increasing, particularly since the start of the Covid-19 pandemic. In a recent survey, Gartner found that 33% of technology and service provider organisations will spend more than US$1 million on AI by 2023.
At the same time, AI technologies are becoming increasingly complex. A major issue with AI is its inherent lack of transparency. Whilst humans are typically heavily involved in developing an AI system – from selecting the data used to train it to testing it before deployment – they are often unable to explain precisely how the system operates.
For example, the developer of AI software used to drive an autonomous vehicle may have full confidence in the accuracy of the vehicle’s decision-making, but it will not be able to explain how the vehicle reaches each and every one of those decisions, given the complexity of the decision-making process.
This inherent lack of transparency is problematic. For example, if an employer uses AI to assess candidates for a job vacancy and the AI system rejects a particular candidate, but the employer cannot explain why, this opens the door to claims of discrimination and other potentially harmful outcomes. If AI cannot be trusted, companies and consumers may be less willing to use the technology.
As a result, there is an increasing focus on AI explainability and transparency. Regulators and policy groups around the world have emphasised the importance of explainable and transparent AI, and the topic forms a cornerstone of the dialogue and frameworks on “ethical” or “responsible” AI development and use.
In certain circumstances, there is also a legal obligation to make AI explainable. Articles 13-15 of the GDPR have been interpreted by courts and regulators as giving individuals a right to an explanation where their personal data has been used by an organisation in solely automated decision-making (which includes AI) that has a legal or similarly significant effect on them. Decisions subject to this right might include the grant of loans, university or school admissions, fraud detection, job offers, benefits decisions, and potentially even some advertising, where individuals are profiled using AI.
In these cases, the relevant organisation must provide “meaningful information about the logic involved” in the automated decision-making process, which amounts to explaining how the AI system has reached the relevant decision.
This obligation has stayed somewhat under the radar since the GDPR came into force in 2018, but there have been recent challenges (in 2020 and 2021) against organisations for failing to use algorithmic or AI technology with sufficient transparency:
- In 2020, various former drivers, supported by the App Drivers & Couriers Union and Worker Info Exchange, brought an action against ride-hailing companies Uber and Ola Cabs in the Amsterdam Court alleging (among other things) that drivers had been accused of fraud and had their contracts terminated based on algorithmic decisions taken without sufficient transparency. The Amsterdam Court ordered Ola Cabs to provide information about its automated decision-making, including disclosing “information that makes the choices made, data used and assumptions on the basis of which the automated decision is taken transparent and verifiable.”
- In July 2021, the Italian data protection regulator fined the operator of an online food delivery platform, Foodinho, €2.6m for, amongst other things, failing to provide transparency in respect of the algorithms it used to manage delivery riders.
- In August 2021, the same Italian data protection authority also fined Deliveroo €2.5m for a lack of transparency in its algorithmic management system. Absent such transparency, the regulator considered there could be discriminatory effects against protected groups.
We expect that, across various sectors and domains, there will be further challenges of this sort against organisations for failing to use automated decision-making (including AI) with sufficient transparency. Focus to date has been on the gig economy, but many other companies are using AI to make important decisions, often with inadequate attention to transparency.
The GDPR will not remain the only source of legal obligations on AI transparency. China recently released a set of draft algorithmic recommendation regulations, which call for transparency in decisions made online using algorithms; these are expected to become law in late 2021 or early 2022. The European Commission has also published a draft AI Regulation, which would impose very stringent obligations on many applications of AI, including enhanced transparency obligations that build on those in the GDPR.
It is therefore sensible for organisations using AI (particularly where it involves the processing of personal data) to address this risk and to take steps to increase the transparency of their AI systems. One way of doing this is to develop and publish an AI Explainability Statement: a document which explains how and why AI is used, the design and functioning of the AI systems concerned, and who is responsible for each stage of their lifecycle. Providing such information is one way of complying with the guidance on AI explainability under the GDPR issued jointly by the UK Information Commissioner’s Office (ICO) and the Alan Turing Institute.
The healthcare symptom-checking company Healthily recently published the world’s first AI Explainability Statement to have received input from the ICO.
In recent years, organisations have become increasingly mindful of data privacy risks, as well as Environmental, Social and Governance (ESG) considerations. As a consequence, both data privacy and ESG are now well-established parts of global compliance programmes, as well as processes such as due diligence.
Both data privacy and ESG compliance rely on a combination of internal record-keeping and public disclosure of how an organisation is meeting its responsibilities. In light of the increasing regulatory and legal scrutiny described above, ethical AI governance should now be approached in the same way. This starts with transparency and explainability.
—
About the Authors:
Minesh Tanna is a Solicitor Advocate and Global AI Lead at Simmons & Simmons; and Jacob Turner is a Barrister at Fountain Court Chambers and author of ‘Robot Rules: Regulating Artificial Intelligence’.