In November 2019, the National Commission for Computing and Liberties (CNIL) called for "a debate commensurate with the stakes" on facial recognition. In April 2021, when Politico unveiled a draft European regulation on artificial intelligence, the debates resumed with a vengeance, with 51 associations for the safeguard of individual rights calling for a ban on "biometric mass surveillance".
It is in this context, and while the Senate has just published a report calling for the creation of a framework for the use of facial recognition in public spaces, that the AI Regulation Chair of the University of Grenoble Alpes is publishing a six-chapter mapping of the uses of facial recognition in Europe. 20 Minutes interviewed Théodore Christakis, director of the team behind this long-term project.
"The current debate is sometimes distorted by a poor knowledge of [facial recognition] and its exact operating methods," declared the CNIL in 2019, a statement you quote in the introduction of your study. How did this observation influence your work?
When the CNIL declared that "a debate commensurate with the stakes" was needed, we also noticed that debates on facial recognition often mixed up very different things. Some discussions went so far as to bring in emotion recognition or video surveillance, even though neither of these is facial recognition. Yet there are real questions to be asked…
The technologies used by PARAFE when you are at the airport, or by the British police to spot a person in a crowd, are both based on facial recognition, but they do not raise the same risks at all.
With my team, we therefore decided to bring a scientific approach to the discussion: our aim was to clarify things from a technical point of view, to detail existing practices across Europe and to draw lessons from them, so that legislators, politicians, journalists and citizens can debate calmly.
You propose a classification of the uses of facial recognition into three main categories: verification, identification and facial analysis, the last of which is not facial recognition in the strict sense but still relies on facial features. Why should the three be distinguished?
The first category is also called authentication. It consists of comparing one image to another: your biometric passport photo with the one PARAFE takes when you walk through the gate at the airport, for example. The machine checks whether the two match; if so, it opens the doors, and then it deletes the data. This does not rule out risks or problematic uses: when this type of technology was tested in two high schools in Marseille and Nice, for example, the CNIL considered it unacceptable.
But it is still different from identification systems, which at the moment are only used in the UK. There, we are talking about cameras that the police place along a road or near a station, which scan the crowd looking for matches with a pre-established list of a few thousand criminals. In such a case, the issues are very different: the individual has no way to refuse to be subjected to the technology, the surveillance is carried out without oversight… That said, this type of technology is also used in experiments such as Mona, at Lyon airport. There, if users wish, they can register their face on their smartphone and then go through all the checks (baggage drop-off, customs, boarding) without ever taking out their boarding pass. They have a choice, so the question, even though it concerns facial identification, is still different from the one raised by the British police.
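To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two matching modes described above. It is not taken from the report: the embed function, the similarity threshold and the watchlist are hypothetical stand-ins for a real face-embedding pipeline. Verification compares one live capture against one reference document; identification searches one face against an entire pre-established list.

```python
# Illustrative sketch only: contrasting 1:1 verification (PARAFE-style)
# with 1:N identification (watchlist-style). embed() and the threshold
# are hypothetical placeholders for a real face-embedding model.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical embedding model: maps a face image to a unit vector."""
    vec = face_image.flatten().astype(float)  # placeholder for a real model
    return vec / np.linalg.norm(vec)

def verify(live_face, passport_face, threshold=0.6):
    """1:1 verification: does the live capture match this one document photo?"""
    score = float(embed(live_face) @ embed(passport_face))  # cosine similarity
    return score >= threshold  # match / no match; data can then be discarded

def identify(crowd_face, watchlist, threshold=0.6):
    """1:N identification: search one face against a whole pre-built list."""
    probe = embed(crowd_face)
    scores = {name: float(probe @ template) for name, template in watchlist.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The structural difference is visible in the signatures: verification takes one reference image supplied by the person being checked, while identification requires a stored database of templates and is applied to everyone who passes the camera.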
In the third part of your report, which deals with facial recognition in public spaces, you stress the difference between "consenting to" and "volunteering for" the use of facial recognition technology. What is at stake?
First, it should be stressed that even if a use is said to be "consensual" or "voluntary", that does not prevent it from posing a problem. In the case of the PACA high school students, for example, their consent was considered problematic because they were under the authority of their school. Then, to take the example of airports again: when you arrive in Paris or Lyon, you can choose to go through the gate equipped with facial recognition, but you have an alternative. That is what being a volunteer means: there is always another possible choice. Consent must be given by well-informed people who are capable of consenting, and so on (the GDPR sets four cumulative conditions: consent must be freely given, specific, informed and unambiguous, editor's note). This subtlety matters, especially when the debate turns towards "ban all facial recognition". That way of framing the problem forgets that the technology has useful functions: some people use it to unlock their smartphone, and those who do not want it can use a PIN code instead. A choice is possible.
In any case, as a user, these two options expose me to a very different risk from being subjected to a system that treats me as a potential criminal because I crossed a road in front of a police camera.
The fourth part of your report deals with the use of facial recognition in criminal investigations. What are the terms of the debate, in your opinion?
There are many different uses here. Let us first imagine a robbery or a murder. The criminal was filmed by a surveillance camera. In such cases, France has legislated to authorize the police to compare the image of the criminal with the criminal records file (TAJ, whose very existence is debated, editor's note). That is facial verification: it raises its own set of questions, but it is quite different from applying facial recognition algorithms to live video streams, as was tested during a carnival in Nice, on the basis of consent, or as is done in Great Britain.
The last part of your study focuses on uses of facial analysis in public spaces, which are still uncommon but, according to you, set to multiply. Why is it important to worry about them now?
Mask-wearing detection models, like the one offered by Datakalab, are not facial recognition, because no "biometric template" is created. But they are still facial analysis, so there is obviously cause for concern. The same goes for emotion recognition technologies. When it comes to detecting whether someone is falling asleep at the wheel, that is very good, it can save lives. But when you are told that these systems can detect personality or lies, we are verging on pseudoscience! (On this subject, read the chapter on emotions in Kate Crawford's Atlas of AI: Théodore Christakis says he is "completely in agreement" with the researcher's analysis, editor's note.) Compiling statistics on mask-wearing, why not. Putting facial analysis into every job interview is more debatable.
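As a rough illustration of that distinction, the sketch below (hypothetical, not from the report or from Datakalab's product) contrasts a facial-analysis output, an anonymous attribute label, with a facial-recognition output, a biometric template that can be stored and later matched to an identity. detect_mask and extract_template are placeholder functions.

```python
# Illustrative sketch only: why facial analysis differs from facial recognition.
# Analysis outputs an anonymous attribute (e.g. "mask worn"); recognition
# derives a biometric template that can be stored and matched to an identity.
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    mask_worn: bool    # feeds an aggregate statistic; no identity is retained
    confidence: float

def detect_mask(face_image) -> AnalysisResult:
    """Facial analysis: classify an attribute of the face, output a label."""
    score = 0.9  # placeholder for a real classifier's output
    return AnalysisResult(mask_worn=score > 0.5, confidence=score)

def extract_template(face_image) -> list[float]:
    """Facial recognition: derive a reusable biometric template (identity key)."""
    return [0.12, -0.34, 0.56]  # placeholder embedding; this is biometric data
```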
What are your main recommendations for legislators and/or citizens?
Clarify the debate. Get informed (that is why we did this work), but also, and above all, specify clearly which use cases we are talking about. This will allow us to address the most dangerous uses of facial recognition first. This is important: the Senate has just submitted a report on the question, in which it pleads for the creation of a European controller; everyone must be able to grasp the question precisely, by looking at each type of use of these technologies. This will also make it possible to see more clearly where laws already provide a minimum framework and where the gaps are most glaring.