The Law Society of England & Wales has launched a groundbreaking public policy commission on the use of algorithms in the justice system, spearheaded by its incoming President, Christina Blacklaws, who takes up her role in July.
The commission, which will take evidence across the year from lawyers, tech experts, public policy specialists and academics, will then publish a report in February 2019 with a view to making recommendations on what, if any, changes may need to be made to the regulation of such technology.
The move comes as UK politicians, industry and civil society experts join colleagues from across the globe to create the AI Global Governance Commission, a universal platform that will develop an international AI framework and policies relevant across different sectors of the economy – and which presumably will include AI’s use in the justice sector (see second story below).
The Law Society initiative represents one of Blacklaws’ key themes for her Presidential year, placing legal technology – and its better understanding and greater use by the legal profession – front and centre.
The commission has come about due to the need to ‘catch up’ with where legal technology has already got to, and in particular to lay down some regulatory groundwork covering areas such as algorithm-based decision-making in the wider justice system, including policing.
While it’s fair to say that most commercial uses of AI and algorithm-based systems in the law don’t yet make legal judgments, focusing instead on areas such as document review, there is concern that some systems – such as those used by the police for facial recognition of potential suspects, or those that recommend whether someone should be released on bail before a court hearing – could put people’s human rights at risk and undermine transparency and fairness in the justice system.
Blacklaws stressed that overall the aim of the commission is to collect evidence from all corners of the profession and to make sure that all views are heard.
In this regard it’s very important that representatives of the legal AI sector get involved, to make sure they are heard too and that, if any regulation does come from this, it enables the sector rather than holds it back.
Or, as Artificial Lawyer likes to put it: legal AI and algorithm-based systems may need a ‘Highway Code’ which, although setting some limits, generally helps the sector to grow with the profession’s and the public’s trust and full support.
What we must avoid is regulation that accidentally hems in this new technology in ways that could harm its growth and prevent its benefits reaching the legal market, and those whom it serves in wider society.
In fact, all powerful new technologies go through a phase of ethical and regulatory introduction – as seen, for example, with the rise of the automobile in the early 1900s and the move from zero regulation to a wave of rules that helped to reduce accidents and protect pedestrians. Then came rules around auto insurance and regulation requiring the upkeep of these machines.
In short, we went from no rules to an ecosystem of laws and regulations in a couple of decades. But the end result was that more people used cars, not fewer. In fact, we could say that the ethical groundwork enabled the industry to expand and gave people confidence in it.
This is what we need to aim for with legal AI and algorithmic decision-making: facilitating the safe growth of this sector, not killing it off with rules so strict that it becomes unusable. So, this is a very welcome step by the Law Society, and a necessary one for legal AI’s wider adoption.
Artificial Lawyer looks forward to getting involved in this debate, and invites readers to do so too. This area will surely evolve over this year, and likely many years ahead, as the $700 billion global legal market – and the huge ‘justice hinterland’ around the world – slowly starts to make use of this technology, which seems all but inevitable.
Below is an account of some of the key points made at the commission launch and yesterday’s panel discussion. The panel featured (as pictured above, left to right):
Sylvie Delacroix, Professor of Law and Ethics, Birmingham School of Law (SD)
Christina Blacklaws, incoming President of the Law Society (CB)
Sofia Olhede, Director, Centre for Data Science, UCL (SO)
Sophia Adams Bhatti, Director of Legal and Regulatory Policy, the Law Society (Chair)
CB – Technology is one of the biggest challenges today. And it sparks polarised debate. And nowhere is this more intense than in the areas of human rights and the justice sector. [The full speech can be found here.]
Justice is a core pillar of the operation of a democratic nation. It needs to be effective and to retain the trust of society.
[However] technology is developing at a phenomenal rate. We need to make sure we understand the choices we make and have a public discussion to give a voice to all so there is an informed consensus [about the use of algorithms in the justice sector].
CB – An algorithm is just a set of instructions to complete a task.
There is nothing wrong with algos in and of themselves. They do what humans do – they take data, process it and give an answer.
Sometimes we make the decisions consciously, sometimes not. And algos can do this more efficiently than us.
The trouble comes when we don’t know what data a decision has been based on, so we are right to ask questions.
This is not about right and wrong, but it’s about asking how to use algos for good and to avoid creating new problems.
[We need to ask] what is appropriate in terms of oversight, how do we check the data is free from bias, how do we make sure the system is trusted, and protect the rights of those whose data is used.
We also need to be able to review and appeal those decisions [made by an algo].
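To make CB’s point concrete: an algorithm really is just a set of instructions that turns data into an answer. The sketch below is a purely hypothetical bail-recommendation rule – the fields, weights and threshold are invented for illustration, not drawn from any real system – and it shows why the data matters: if the records fed in are biased, the ‘answer’ inherits that bias.

```python
def recommend_bail(record):
    """A made-up rule: weigh two fields of a defendant's record.

    The weights and threshold are arbitrary illustrations. If the
    underlying records are biased (e.g. over-policing of one group
    inflates prior convictions), the output inherits that bias.
    """
    score = 2 * record["prior_convictions"] + 3 * record["failures_to_appear"]
    return "refer to judge" if score >= 6 else "recommend bail"

print(recommend_bail({"prior_convictions": 1, "failures_to_appear": 0}))
print(recommend_bail({"prior_convictions": 2, "failures_to_appear": 1}))
```

The questions raised above – oversight, bias-checking, trust, and the right to review and appeal – apply to exactly this kind of rule once it is buried inside software.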
SO – This is about human subject data [and] the main issue is bias.
There can be many sources of bias – e.g. due to poor design. (The example was given that if you used Twitter activity to judge an event’s impact, all you would really measure is the demographic groups that tweet the most, rather than those who may actually be affected by the event.)
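The Twitter example is a textbook case of sampling bias, and the arithmetic is easy to see. In the hypothetical sketch below (all figures invented), two equally sized groups hold different views but tweet at very different rates; an estimate built from tweets alone badly misstates the population average.

```python
# Hypothetical population: two equal groups with different views ("opinion",
# a value in [0, 1]) and very different rates of tweeting about an event.
groups = {
    "A": {"size": 1000, "opinion": 0.2, "tweet_rate": 0.9},
    "B": {"size": 1000, "opinion": 0.8, "tweet_rate": 0.1},
}

# True population average: weight each group's opinion by its size.
true_mean = sum(g["size"] * g["opinion"] for g in groups.values()) \
    / sum(g["size"] for g in groups.values())

# Naive estimate from tweets alone: weight by who actually tweeted.
observed = sum(g["size"] * g["tweet_rate"] * g["opinion"] for g in groups.values()) \
    / sum(g["size"] * g["tweet_rate"] for g in groups.values())

print("true mean:", round(true_mean, 3))    # the whole population
print("from tweets:", round(observed, 3))   # dominated by the louder group
```

The tweet-based figure lands far from the true average purely because group A tweets nine times as often – no individual lied, the sample was simply the wrong one.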
This also needs to be transparent and fair. (And the point was made that you can’t be ‘mathematically fair to all at all times’).
There are also different ways of approaching prediction of outcomes. You can have explanatory frameworks, e.g. if you have X level of education it predicts Y level of future salary. But, not all predictive frameworks can be explained very well. (And it’s this explainability that needs to be explored.)
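The ‘explanatory framework’ SO describes can be as simple as a least-squares line, whose coefficients can be read off and explained – unlike the internals of many machine-learning models. The data below is invented purely to illustrate the shape of such a model.

```python
# Invented data: years of education (x) vs annual salary in £k (y).
xs = [10, 12, 14, 16, 18]
ys = [21, 25, 27, 33, 34]

# Ordinary least squares for a single predictor: y ~ slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The model explains itself: each extra year of education predicts
# roughly `slope` more units of salary.
print(f"salary = {slope:.2f} * education + {intercept:.2f}")
```

A black-box model might make a similar prediction, but there is no coefficient to point at when someone asks ‘why?’ – and that gap is exactly the explainability problem the panel flagged.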
SD – Algos can help with justice and can increase access to justice. No one would argue with that.
But some tools are too blunt and ignore subtle features of the issues.
Algo tools will transform our justice system, I have no doubt about that and this will be a good thing. But this will also change our values.
What if algos make us less and less able to adapt? Less able to deal with uncertainty.
(The key point made here was that if we have programmed systems that we accept can make decisions for us then we may just become ‘lazy’ and accept them as they are and not seek to update them and keep improving them. This then creates a new way of looking at society where our ethical and moral judgments become fossilised, one might say, and we stop adapting to a changing society. And that in SD’s view was a risk.)
And there was much, much more, plus some interesting questions from the floor: how are we defining justice in this initiative? How does GDPR impact all of this? What concrete outcomes will be delivered in terms of changing laws on things like privacy and human rights? And how does this overlap with other areas of existing legislation?
Artificial Lawyer also asked: can we look at this from the other direction, i.e. that not using AI and algorithmic tools could actually increase risk in the justice sector? There seemed to be support for this view from the panel: there may indeed be cases where decisions taken by humans will be improved by a machine’s ability to crunch data and provide insights, and where not using an AI-driven analysis could mean humans make worse decisions.
OK, that’s all there’s time for today. Artificial Lawyer will be writing up some more thoughts next week. As mentioned above, it’s vital that legal tech companies get involved in this, as the outcome could impact how they may operate in the future.
Moreover, this commission could become a model for other legal markets around the world, and its decisions could become a benchmark. All the more reason for everyone to get involved.
If you would like to provide evidence, or just ask a question about the commission, then please contact:
Olivier Roth, Domestic Human Rights and Constitutional Law Policy Advisor, who is the lead project manager for the commission.
Please use this email address: firstname.lastname@example.org
Meanwhile, in other UK news related to the regulation of AI: check out this one, which also presumably will cover the use of AI in the legal sector.
UK Parliament announcement on AI Regulation
London, 15 June 2018 – UK politicians, industry and civil society are joining with colleagues from across the globe to create the AI Global Governance Commission, a universal platform which will develop an international AI framework and policy.
The AI Global Governance Commission was recommended by the All Party Parliamentary Group on AI (APPG AI) and launched by Big Innovation Centre, in response to the need for global policy on AI, as well as the recommendations of the recent Lords Select Committee AI report, in which MPs and peers called for a cross-sector code of conduct and standards for the industry.
At a meeting of the All Party Parliamentary Group on AI in the House of Commons this week, industry experts agreed that the lack of an international ethical framework is currently preventing entrepreneurs from developing and bringing to market products, services and systems with confidence, and that businesses in the UK are hesitant to embrace new technologies which they know could be transformative.
Birgitte Andersen, CEO of the Big Innovation Centre, said: “The AI Global Governance Commission will revolutionise the way we debate and legislate on AI. It will accelerate global policy development and create a shared vision via an international ‘open policy’ making approach across the public and private sector, civil society and academia.”
Gosia Loj, AI Global Governance Commission Lead, added: “We will prototype necessary global public policies that will enable society to co-exist with AI and other new technologies internationally. We will also prototype a new agile governance structure fit for the future.”
In an open letter to the Cabinet, the Chairs of the All Party Parliamentary Group on AI, Lord Clement-Jones CBE and Stephen Metcalfe MP, have called on the UK Government to join the AI Global Governance Commission and help shape the global agenda.
The initiative has cross-party support in Parliament, including Vice-chair of the Conservative Party Kemi Badenoch MP, Darren Jones MP (Labour), Baroness Kramer (Liberal Democrat) and cross-bench peer Martha Lane Fox, among others.
The Commission is also supported by a range of industry and academic institutions, including Lloyds Banking Group, the University of Oxford, IBM and KPMG, as well as international stakeholders, among them The Bugshan Group from Saudi Arabia and the Metropolitan City of Gwangju in South Korea.
“The UK has a longstanding history of robust governance standards,” said Lord Clement-Jones CBE. “It makes absolute sense that we create an ethical AI-enabled world.”