The UK’s House of Lords Select Committee on Artificial Intelligence has finally published its major report into the impact, regulation and use of AI, called ‘AI in the UK: ready, willing and able?’, after taking evidence from multiple experts across many sectors, including from the legal industry.
For those who have been following the debate closely it will come as no surprise that the report understandably sticks to the main themes of jobs and ethics. Although ‘the law’ appears many times as a subject related to liability, transparency and ethics, there is little about the actual business of law and AI, or the New Wave of legal technology driven by AI tech such as NLP and machine learning. There are, again, references to the need for transparency in any algorithms that may help make decisions related to a defendant in a court case.
The Lords committee has also asked the UK’s Law Commission to review whether existing laws on liability cover errors created by an AI system.
‘There is no consensus regarding the adequacy of existing legislation should AI systems malfunction, underperform or otherwise make erroneous decisions which cause harm. We ask the Law Commission to provide clarity,’ the report says.
There is also a significant focus on the need for AI decision-making to be regulated and safe.
This is all very important, but the reality is that 99% of AI systems used in legal work at present come nowhere near making decisions on issues such as guilt and are mainly focused on textual review for due diligence, compliance and risk spotting. Or, they are used for litigation prediction, research or expert systems.
In doc review work, all an AI system is doing is highlighting what appears to be a clause worth reviewing. It does not make the final decision; that remains with the lawyer. Hence, although the focus on ethics is useful – and important – Artificial Lawyer can’t help feeling we’re getting a bit carried away. Also, some of the infamous cases from the US of using algorithms to decide whether a defendant should be allowed to return home ahead of their court date were not necessarily using AI systems in the first place, and perhaps were no more or less biased than the judging criteria already being used in some courts.
In any case, this is what the report said:
One of the recommendations is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Other conclusions from the report include:
- Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
- Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed needs to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
- The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
- The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
- Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
- At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
- The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
- The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK.
The Chairman of the Committee, Lord Clement-Jones (who is also a partner at global law firm DLA Piper), said: ‘The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.’
‘The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.’
‘AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.’
‘We want to make sure that this country remains a cutting-edge place to research and develop this exciting technology. However, start-ups can struggle to scale up on their own. Our recommendations for a growth fund for SMEs and changes to the immigration system will help to do this.’
‘We’ve asked whether the UK is ready, willing and able to take advantage of AI. With our recommendations, it will be,’ he concluded.