Today, Artificial Lawyer is joining the campaign calling for the better regulation of facial recognition (FR) technology around the world.
FR goes right to the heart of the law and AI, and encapsulates many of the key issues we face today as a society as new technology provides both social and economic benefits, but also in some cases such as this, creates an unregulated ‘Wild West’ that impinges on civil liberties.
Banning FR outright is not feasible, so the best approach is to encourage sensible regulation. There are many parties already working hard in this area, and Artificial Lawyer would like to lend its support to their efforts.
To kick things off, here is a piece by news reporter Irene Madongo on what is currently happening with FR and why it matters.
The Facial Recognition Landscape Today
There are few dull moments in the world of facial recognition (FR), it seems. For a while now, the sector has stirred up deep concern, outrage even, over the use of FR technology by authorities and private firms on the unsuspecting public.
Proponents of FR, however, point to its benefits, which they say include helping police forces identify criminals and solve cases.
In recent weeks, however, the FR debate seems to have taken a new turn, judging by developments on the regulatory front.
In Sweden, the national Data Protection Authority (DPA) recently fined a municipality over a school's use of FR to monitor students' attendance. According to the DPA, the municipality had violated aspects of the EU General Data Protection Regulation (GDPR).
The exercise was rolled out as a trial to see if FR could speed up attendance reporting, which the local authority reportedly said took 17,000 hours of teachers’ time.
The regulator, however, found that facial recognition amounted to camera surveillance of the students in their everyday environment, was an ‘intrusion on their integrity’, and that ‘presence control can be done in other ways that are less privacy violating than facial recognition’.
The school said it had obtained the students’ consent for the project, but a DPA lawyer said it could not use consent in this case.
Regulators bare their teeth
The same month it was the turn of British regulator the Information Commissioner’s Office (ICO) to announce it would investigate the use of FR at the King’s Cross development, around the London transport hub. Concerns surfaced following reports that a face-scanning system had been deployed there.
The ICO said: ‘Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all.’
King’s Cross developer Argent reportedly claimed the technology was to ‘ensure public safety,’ but did not comment on issues such as the legal basis for the cameras’ use or the systems it has in place to protect the collected data, according to the BBC.
Announcing the investigation, Information Commissioner Elizabeth Denham expressed ‘deep concern’ over the use of FR in public places by both the public and private sector.
‘As well as requiring detailed information from the relevant organisations about how the technology is used, we will also inspect the system and its operation on-site to assess whether or not it complies with data protection law,’ she said. ‘Put simply, any organisations wanting to use facial recognition technology must comply with the law – and they must do so in a fair, transparent and accountable way. They must have documented how and why they believe their use of the technology is legal, proportionate and justified.’
London Communications Agency, the firm handling Argent’s press queries, declined to answer questions put to it by Artificial Lawyer, saying it was not answering any questions ‘at the moment’.
But it emailed a statement saying: ‘King’s Cross is working collaboratively with the Information Commissioner’s Office on the inquiry it has announced, and will comment further in due course.’
There have also been media reports (in the FT) that European Union (EU) officials are looking into rolling out new FR rules which would give citizens ‘explicit rights’ over the use of their data and restrict the ‘indiscriminate use’ of FR.
An EU spokesman, however, denied that any such development was under way, telling Artificial Lawyer that the story was based on a leaked document. ‘No such project or plans exist. Draft internal brainstorming documents of the services should never be confused with policy,’ the statement said, pointing instead to ongoing EU work on AI, which includes guidelines towards a human-centric approach.
With tensions rising, it was perhaps inevitable that FR cases would show up in the courts sooner or later. In the UK, two cases of interest have emerged.
In what is considered to be the world’s first legal challenge to police use of FR technology, British judges declared that a police force was entitled to use FR, rejecting a challenge by Cardiff resident Ed Bridges, who argued the force had violated his privacy.
Bridges’ face was scanned while he was doing his shopping in 2017 and also when he participated in a ‘peaceful’ anti-arms protest a year later.
South Wales Police had denied the privacy violation allegations, saying the technology ‘was used in the same way as photographing a person in public.’
The High Court found the force had ‘followed the rules’ and that its ‘use of [FR] was justified,’ according to the BBC.
According to Liberty, although the Court found the use of FR ‘currently lawful,’ it also ‘said facial recognition interferes with the privacy rights of everyone scanned by a camera, and 500,000 people may have been scanned by South Wales Police. [And] the Court found the current legal framework governing facial recognition to be adequate, but said that it would have to be subject to periodic review’.
The second case is a challenge to the Metropolitan Police’s use of the technology. Rosa Curling, solicitor at Leigh Day, representing Big Brother Watch and Baroness Jones in the case, said in July: ‘Our clients have provided compelling evidence to the court, showing that the use of automated FR by the Metropolitan Police is contrary to Articles 8, 10 and 11 of the European Convention on Human Rights.’
Announcing the challenge in July, Big Brother Watch director Silkie Carlo stated that FR cameras are ‘authoritarian’ and ‘dangerously inaccurate.’
‘When the police use facial recognition surveillance they subject thousands of people in the area to highly sensitive identity checks without consent.’
The London police force has already made headlines over its use of FR, from reports that it fined a man in Romford £90 after he covered his face to avoid FR technology, to news that researchers at Essex University found that suspects flagged in its FR trial were actually ‘innocent.’
The Met Police has denied the claims and disputed the research findings. Deputy Assistant Commissioner Duncan Ball said of them: ‘We are extremely disappointed with the negative and unbalanced tone of this report … The Met’s approach has developed throughout the pilot period, and the deployments have been successful in identifying wanted offenders.’
The force declined to comment on Big Brother Watch’s legal action, saying it did not comment on ongoing court cases.
So far San Francisco and, it is understood, two other US cities have banned city departments from using FR.
Could cities in the UK and across the EU up the ante and follow suit? Not yet, if at all, it seems. And the fact that regulators are taking action may not in itself be the huge deterrent campaigners are crying out for.
For a start, FR is still in its early stages and the regulators’ position is not yet entirely clear. The proof of the pudding may lie in the type and size of action they take.
The Swedish DPA’s 200,000 Swedish Krona penalty, which amounts to roughly £16,800 or $20,700, may be considered by some to be a relatively small fine, a mere slap on the wrist for a ‘serious’ violation.
Lessons from the financial services sector show that banks only began to think seriously about changing their bad conduct after being hit with eye-watering penalties by US authorities, alongside new and tighter regulatory reforms.
With FR, of course, lawmakers in most countries have yet to decide whether its widescale use in public places poses a risk to civil liberties, or needs regulation beyond what is already available.
Other future indicators
The rulings from some of the cases above show that this has become a real legal issue now. But, it’s still one where much is uncertain. And, naturally, judges and regulators can only enforce the law as it currently stands. It’s ultimately up to electorates and their lawmakers to decide what kind of world we want to live in and then legislate for it.
Anti-FR campaigners are likely to step up their demands against what they see as the unregulated and often blanket use of FR technology, regardless of where the law stands today. Clearly this battle has only just begun and there is much more to come.
If you share an interest in this area and would like to get in contact, please do. Artificial Lawyer is keen to hear from lawyers, regulators, campaigners – as well as the companies and public bodies involved in the use of facial recognition. Contact: firstname.lastname@example.org