EU Tests AI Lie Detector At Borders – But What’s Next?

The EU has launched the trial of an AI lie detector system that uses a digital avatar to interview travellers at border posts, asking them questions and then using facial expression ‘biomarkers’ based on previously taught patterns to decide if they are lying.

The €4.5m project is backed by a number of countries, including the UK, with Manchester Metropolitan University playing a key role; Poland, Spain, Hungary and Germany are also involved, among others. The trial will run until August 2019, with initial testing on incoming travellers in Hungary, Latvia and Greece.

The EU said that the project, called iBorderCtrl, will ‘speed up traffic at the EU’s external borders and ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars [which will use] advanced analytics and risk-based management.’

In short, the EU is experimenting with a machine learning system that reads facial change indicators in the hope of making what would amount to a legal assessment of whether someone is lying, in this case with regard to immigration.
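How that classification might work has not been made public, but the generic shape of such a system can be sketched. The Python sketch below assumes a standard supervised classifier trained on labelled facial-metric features; every feature, label and data point here is invented for illustration and none of it reflects iBorderCtrl’s real inputs or model.

```python
# Hypothetical sketch of the kind of pipeline described above:
# train a classifier on labelled facial-expression features
# ("biomarkers"), then score a new traveller interview.
# All features and data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-interview facial metrics,
# e.g. blink rate, micro-expression counts, gaze shifts.
n_interviews, n_features = 500, 12
X = rng.normal(size=(n_interviews, n_features))
y = rng.integers(0, 2, size=n_interviews)  # 1 = labelled "deceptive"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The output is a probability score, not a verdict.
p_deceptive = clf.predict_proba(X_test[:1])[0, 1]
print(f"Estimated probability of deception: {p_deceptive:.2f}")
```

Note that even in this toy form the output is a probability, not a finding of fact, which is precisely why treating such a score as a legal assessment is contentious.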

Given that lying at a border in an attempt to gain entry would likely constitute a criminal offence, this software has important human rights and justice implications.

It also raises the question of whether such technology could later be used in other scenarios. If the EU states involved become satisfied that it is weeding out illegal migrants, criminals trying to use fake identities, or perhaps people transporting drugs that have not been detected by conventional means, then the next logical question may well be: where else can we use this?

Would police in a number of countries wish to use it to make quick ‘on the spot’ assessments of suspects in towns and cities across Europe? For example, the police stop someone suspected of involvement in an earlier assault. The person denies it. But the police then produce the AI lie detector and, based on the suspect’s facial expressions, decide they are lying. The suspect is arrested and carted off to the station for further questioning.

And would courts want to use it to check whether a person in the witness box is lying or not? Would judges be glad to have some extra support to help decide if someone is perjuring themselves in a trial?

Is that a sci-fi plot line… or is this something quite possible already? Some might say that it’s just the next stage on from using a breathalyser to spot whether someone has been drink-driving.

In any case, the project is underway in relation to spotting suspects at the EU’s borders. And this is something the EU says it really does need.

‘More than 700 million people enter the EU every year. The huge volume of travellers and vehicles is piling pressure on external borders, making it increasingly difficult for border staff to uphold strict security protocols whilst keeping disruption to a minimum,’ said the EU Commission in a statement.

‘We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks,’ said project coordinator George Boultadakis of European Dynamics in Luxembourg.

‘iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit,’ he added.

So, what do you think? Is this a slippery slope? Is this putting way too much faith in a novel form of facial recognition software that clearly has legal implications for the person on the other end of it? Or does this make sense? And if there are issues with it, isn’t it sensible to trial it, as they are doing, and see if they can make sure the system works? Any AI ethics readers out there who have a view? AL would be interested to hear what you think.

2 Comments

  1. There is a distinct difference between a device used to alert investigators that a more thorough investigation is called for and a device that determines guilt. Using a device to triage a stream of travelers into greater and lesser threats is quite different from using the same device to determine guilt. It is likely that, just as intelligence and special ops groups train agents to “beat the box”, there will soon be a cottage industry training people to beat the new device. So while the new device will hopefully improve the identification of low- and medium-level threats, it is unrealistic to expect it to detect highly trained and elite actors, who are presumably far more dangerous than the people most likely to be identified. Additionally, there is the possibility that a sophisticated hacker (state sponsored?) could convince the system to ignore certain people. Hidden deep in the AI code, this would be very hard to detect or eradicate. So once again, such a system should not be expected to always catch high-level, state-sponsored or well-heeled adversaries.

  2. “…use facial expression ‘biomarkers’ based on previously taught patterns to decide if they are lying…” Oh, really? (sarcasm of an ex-psychologist). OK, let’s wait for a system built on Lombroso’s “theories” to detect possible criminals.
