AI’s Big Ethical Questions

By Charlotte Walker-Osborn, International Head of Technology Sector, Eversheds Sutherland

What is your view of the risks of facial recognition technology and its impact on civil liberties?

There is a clear divergence of opinion in this area. The greatest tension, perhaps, sits between those who consider facial recognition a necessary exception to privacy, at least for law enforcement, and those who consider the technology far too invasive, even where it improves the efficiency and accuracy of law enforcement.

That debate is stronger in certain countries. I envisage there will be continuing tensions – justification and/or legitimate interest will be important. Clearly, there can be efficiency and experience improvements when using facial recognition technology – just look at facial scanning at airports, or the ease of entry to stadium events it enables, something deployed at the 2020 Tokyo Olympics, for example.

It also has benefits if you are happy to be recognised by face and have the right marketing personalised and directed at you. Quite different views are forming, with some people really supportive of the benefits. Crucially, right now, there is work to be done. The technology being talked about is getting impressive results for some people, particularly white males. However, the results drop for others, and this currently brings a risk of false identification, particularly for women and ethnic minorities.

Whilst this is likely to be due in large part to the data rather than to the technology providers and AI systems, if we are to build confidence fast in this area, it is essential we see more good news stories. I know some of the leading technology companies in this area have been very clear that they have been taking steps to improve accuracy and re-train (including using new data sets). I would hope the results will show continuing improvements so that bias and inaccuracy are no longer a talking point.
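To make the disparity described above concrete: it is typically surfaced by measuring error rates separately for each demographic group rather than in aggregate. The following is a minimal sketch of such an audit in Python; the group labels, threshold and records are hypothetical illustrations, not any vendor's actual figures.

```python
from collections import defaultdict

def per_group_false_match_rate(results, threshold=0.8):
    """Compute the false-match rate separately for each demographic group.

    `results` is an iterable of (group, similarity_score, is_same_person)
    tuples from a face-matching evaluation set. A false match occurs when
    the system scores two *different* people above the match threshold.
    """
    attempts = defaultdict(int)
    false_matches = defaultdict(int)
    for group, score, is_same_person in results:
        if not is_same_person:              # only impostor pairs can false-match
            attempts[group] += 1
            if score >= threshold:
                false_matches[group] += 1
    return {g: false_matches[g] / attempts[g] for g in attempts}

# Hypothetical evaluation records: (group, score, ground truth)
sample = [
    ("group_a", 0.91, False), ("group_a", 0.40, False), ("group_a", 0.95, True),
    ("group_b", 0.85, False), ("group_b", 0.88, False), ("group_b", 0.97, True),
]
print(per_group_false_match_rate(sample))  # a gap between groups signals bias
```

A large gap between the per-group rates is exactly the kind of result that, on the account above, re-training with broader data sets is meant to close.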

My view, frankly, is that this area will continue to be widely debated, depending on which side of the civil liberties fence you sit, until the results are tangibly improved across many systems rather than a few. No doubt, however, this technology has huge application for law enforcement, health, and marketing and retail, amongst other areas. If we can get the legal and ethical “guide-rails” right on the first two, adoption seems inevitable.

For the third, as with much marketing/personalisation, mandatory consumer laws must be followed in the countries in which they apply, and AI systems will need to account for this, including ensuring they do not capture individuals who have relied on legal rights to “opt out” or who have not “opted in” to this personalisation (depending on the country’s laws).
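In practice, honouring those rights usually means checking a consent record before any personalisation is attempted. The sketch below is a hedged illustration of that check, assuming a simple in-memory consent store and a flag for whether local law requires opt-in; all names and records are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    opted_in: bool  # True only where the user has actively consented

# Hypothetical consent store; in practice this would be a database lookup.
CONSENT = {
    "alice": ConsentRecord("alice", opted_in=True),
    "bob": ConsentRecord("bob", opted_in=False),
}

def may_personalise(user_id: str, law_requires_opt_in: bool) -> bool:
    """Return True only if personalisation is lawful for this user.

    Under opt-in regimes, absence of a consent record means no
    personalisation; under opt-out regimes, absence means it is allowed.
    """
    record = CONSENT.get(user_id)
    if law_requires_opt_in:
        return record is not None and record.opted_in
    # Opt-out regime: personalise unless the user has expressly opted out.
    return record is None or record.opted_in

print(may_personalise("alice", law_requires_opt_in=True))   # True
print(may_personalise("bob", law_requires_opt_in=True))     # False
print(may_personalise("carol", law_requires_opt_in=False))  # True (no opt-out)
```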

The infamous COMPAS case highlighted the problems of bias in algorithms – how can society prevent bias in legal algorithms?

Society can help prevent bias in algorithms, and bias in AI adoption, by introducing laws and/or guidance on AI programming (as Germany did in relation to autonomous vehicles), and by introducing ethics guidance and laws around the use of AI as well as rules around transparency and the use of (especially) personal data. Some countries are already doing this. As with any decision or action, it is critical the person behind it does not introduce or amplify bias.

Ultimately, companies providing and adopting AI must ensure that it does not contain incorrect logic, whether at the build level (biased programming), in the way it is trained, or through the use of distorted and/or insufficient input data. It will be interesting to see which laws come into force calling for auditability and transparency as, arguably, these seek to give individuals the ability to challenge potential bias, but they do put pressure on AI providers, especially where the systems are proprietary. There will be much more to come in this area.
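One concrete way such auditability requirements tend to be met is by recording, for every automated decision, enough context to reconstruct and challenge it later. The following is a minimal sketch of that idea, not a statement of any legal standard; the record fields, file format and example values are all assumptions made for illustration.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionAuditRecord:
    """Everything needed to reconstruct and challenge one automated decision."""
    timestamp: float
    model_version: str      # which build of the system produced the output
    training_data_id: str   # which data set snapshot the model was trained on
    inputs: dict            # the features the decision was based on
    output: str             # the decision itself
    explanation: str        # human-readable reason, where the system can give one

def log_decision(record: DecisionAuditRecord, path: str = "audit.log") -> None:
    # Append-only JSON lines: each decision becomes one durable log entry.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionAuditRecord(
    timestamp=time.time(),
    model_version="risk-model-2.3.1",
    training_data_id="dataset-snapshot-2019-06",
    inputs={"prior_offences": 1, "age": 34},
    output="low risk",
    explanation="score 0.21 below threshold 0.5",
))
```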

The Law Society has recommended making a national register of algorithms in the criminal justice sector. Do you agree with this approach? What challenges does it face?

No doubt, there are pros and cons. The efficiency and, if transparent, auditability of the systems are tangible. I agree with many of the sensible recommendations in the UK Law Society’s report, but implementation needs to be thought through carefully. For example, whilst the recommendation that there be a record of the datasets used for training may help build confidence and transparency, it seems ripe for challenge if those datasets are not the right ones, or not large enough.

Also, if the Information Commissioner’s Office is to take a pro-active role in examining algorithmic use, this would need to be carefully implemented, with employees with the right skills leading it and sufficient resource to deal with the workload.

It is interesting to see some countries’ courts taking a strong view against the use of algorithms in certain cases – for example, in France, laws have already been introduced to restrict the opening of the justice system to certain AI prediction and analytics.

If an AI system used by a law firm ‘makes a mistake’ whose fault is it? How should insurance companies handle this issue?

It depends on the facts at the time. Ultimately, a lot of this will come down to what the contracts between the various parties say. We should remember that, currently, AI often plays a supporting role in legal advice.

The AI platforms that law firms can procure and use vary enormously. There can be very bespoke platforms where the law firm has specified what “good” looks like. Currently, there is more adoption where the AI provider has already developed some “out of the box” training based on data sets which the client and law firm have not provided. Those need careful diligence and, in my view, rigorous testing by the law firm before being used with client data – something I know Eversheds Sutherland takes very seriously.

The lawyers who then continue to train the AI platform need themselves to be well-trained so that errors do not creep in: they must know the law as well as how to train the AI platform/system properly. Even once the right AI platform is chosen and careful diligence and training take place, it is essential to build in human intervention at the end to check the results and for quality control. Ultimately, if there is a mistake, as mentioned above, it comes down to basic law and what each contract between the various parties states.
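That human-intervention step can be enforced in the workflow itself, for instance by routing any output the system is not sufficiently confident about to a lawyer before it reaches the client. A hedged sketch of one such routing rule follows; the item structure, field names and confidence threshold are hypothetical, not a description of any particular platform.

```python
from typing import NamedTuple

class ReviewItem(NamedTuple):
    clause_id: str
    ai_finding: str
    confidence: float  # the model's own confidence in its finding, 0..1

def route_for_review(items, confidence_floor=0.90):
    """Split AI output into auto-accepted findings and ones needing a lawyer.

    Everything still gets a final human quality check; low-confidence items
    additionally get full re-review rather than spot-checking.
    """
    auto_accept, needs_lawyer = [], []
    for item in items:
        if item.confidence >= confidence_floor:
            auto_accept.append(item)
        else:
            needs_lawyer.append(item)
    return auto_accept, needs_lawyer

findings = [
    ReviewItem("clause-14", "change-of-control consent required", 0.97),
    ReviewItem("clause-22", "possible unlimited liability", 0.61),
]
accepted, flagged = route_for_review(findings)
print(f"{len(accepted)} auto-accepted, {len(flagged)} flagged for a lawyer")
```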

This is no different to the provision of law and legal services now. Whilst human intervention is applied at the end, I would hope that the AI improves the end result for the client, and that the quality-assurance step mentioned above means there is less risk of mistakes in any event.

Is any country getting AI regulation right?

Yes and no. A number of countries such as the UK, the UAE, China, Japan, India, France and Germany, and country partnerships including the EU, are trying to get it right, with all of those named releasing country (or EU-wide) strategies to promote the use and development of AI and related guidance and policy. We are seeing a plethora of ethics guidance, and provision in the sphere of privacy, including from the USA.

We need to go back to basics. More clarity is needed in a number of legal areas: for example, around intellectual property, and around the employment position if AI replaces, in whole or in large part, a task a person could do. More rules are also needed around transparency and automated decision-making (whether or not personal data is involved) to assist with accountability. And, of course, we need explicit laws around the deployment of AI in weapons and warfare, a widely debated area.

At the same time, regulation must not stifle development and creativity, and it needs to be flexible for the future. There are benefits to AI adoption: the discovery of new cures for disease, improved efficiency in food production, a better ability to defend against war and cyber-threats, the life-saving potential of autonomous vehicles and much more.

Ideally (and this is often where the law falls down), there needs to be more consistency in regulation across the world. Without this, inevitably, companies with larger resources will be the creators and adopters of AI, and many SMEs may not see the business benefits for some time.

About the author: 

Charlotte Walker-Osborn is a partner within the commercial group of global law firm Eversheds Sutherland and its International Head of Technology Sector. She has been a dedicated technology lawyer for 20 years, advising on all aspects of technology law as well as focusing on cyber-security and telecoms law.

Charlotte has significant expertise leading traditional technology/outsourcing projects and international/UK cutting-edge digital, cloud, IoT, Artificial Intelligence, Connected/Autonomous Vehicles and Additive Manufacturing projects.

Charlotte is a British Computer Society-Information Security Specialist Group committee member. She acts for many global, FTSE 100 and fast-growth companies, including in the technology, consumer, diversified industrials, utilities, transport and healthcare sectors.