Why is AI video surveillance illegal?

Artificial intelligence-assisted video surveillance (or “AVS”) is creeping into our cities, without transparency and completely illegally. For several years, we have been fighting these technologies, in particular by challenging them before the courts. We believe that they infringe upon our freedoms, and that existing laws can be used to block their deployment. However, the European Commission has been pushing for the adoption of new rules, favouring manufacturers, to regulate devices using “artificial intelligence”, including AVS. In its wake, the CNIL (Commission Nationale Informatique et Libertés, the French data protection supervisory authority) has advocated for a new, specific legal framework (in French). In our answer to the CNIL’s public consultation, we explained why the current protective rules and safeguards should in no case be replaced by new, sector-specific rules. These are our arguments (see a full explanation of our position here – in French).

Biometric data at the core of data protection

Data protection legislation provides special protection for data considered “sensitive” due to the intimate nature of the information it reveals (such as political opinions or sexual orientation). This category of data includes “biometric” data, i.e., “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person” (definitions provided by article 4(14) of the GDPR and article 3(13) of the Law Enforcement Directive).

This definition can be broken down into three elements, all of which are systematically present where AVS is concerned.

Firstly, the personal data must be the object of specific technical processing.

This is the case for AVS systems, since they supplement a general processing operation (filming public spaces) and pursue their own purpose (see below). The processing carried out is specific because it applies an algorithm or software to video streams in order to isolate, characterise, segment or make apparent information relating to a natural person being filmed, or to extract data about that person from the video stream, even after the fact.
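To illustrate what such “specific technical processing” can look like in practice, here is a minimal, hypothetical sketch in Python using OpenCV’s stock pedestrian detector. The video file name is invented, and a real AVS product would use far more sophisticated detection models, but the structure is the same: an analysis layer is bolted onto an ordinary camera stream and extracts structured information about each person filmed.

```python
# Minimal sketch: a "specific technical processing" layer bolted onto a
# plain CCTV stream. OpenCV's built-in HOG + linear SVM pedestrian
# detector is applied to each frame to isolate the people being filmed,
# turning raw footage into structured data about natural persons.
import cv2

cap = cv2.VideoCapture("street_camera.mp4")  # hypothetical video source
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each detection is a bounding box around one person: information
    # isolated and "made apparent" from the video stream.
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        print(f"person detected at x={x}, y={y}, size={w}x{h}")

cap.release()
```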

Secondly, the data must relate to the physical, physiological or behavioural characteristics of a natural person.

AVS does indeed record such data:

  • Physical or physiological information includes any information pertaining to a person’s body broadly defined: faces, silhouettes, or any isolated characteristic of the body such as hair, skin or eye colour, the shape of the face, height, weight or age;
  • Behavioural data includes any information pertaining to a body’s movements in an environment and through space. A piece of clothing or an accessory worn by a person at a specific point in time, a gesture, the expression of an emotion, the direction of a movement, a position in space and time (sitting, standing, static, pace…) can all be considered biometric data.

Finally, the processing must allow or confirm the unique identification of a person. According to the European Data Protection Board (EDPB, which brings together the European national data protection supervisory authorities), this means the ability to single out that person in an environment and identify them across several images, without necessarily revealing that person’s identity (see the Guidelines 3/2019 on processing of personal data through video devices, version 2.0, par. 82, p. 19).

In the case of AVS, each system is programmed to combine specific elements (silhouette, colour of the clothes, position, direction, behaviour) in order to:

  • recognise a person across several frames or video streams, i.e., either over a period of time or within a given space, by attributing a digital print to that person which allows the AVS system to characterise their attributes or behaviour and single them out in the images. The most typical example is several cameras following a person walking through public space;
  • use the information obtained by the AVS about their physical or behavioural characteristics. This information can be sent to agents on the ground, who can then “recognise” the person uniquely and take action against them (e.g., “the man with the blue hat is in the high street, stop and check him”).

In both cases, the person is uniquely identified within an environment, a group of people or a scene.
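To make this “digital print” mechanism concrete, here is a minimal, hypothetical sketch in Python (NumPy only). It is not any vendor’s actual algorithm: real systems compute learned re-identification embeddings, whereas this sketch uses a crude colour histogram as the print; all names and values are invented for illustration.

```python
# Minimal sketch of the "digital print" described above: each detected
# person is reduced to a feature vector (here, a crude colour histogram
# standing in for the learned re-identification embeddings real AVS
# products use), and vectors are compared across frames or cameras to
# single the same person out, without ever learning their name.
import numpy as np

def digital_print(person_crop: np.ndarray) -> np.ndarray:
    """Reduce a cropped image of a person (H x W x 3, uint8) to a
    normalised colour histogram acting as their 'print'."""
    hist, _ = np.histogramdd(
        person_crop.reshape(-1, 3).astype(float),
        bins=(8, 8, 8), range=((0, 256),) * 3,
    )
    vec = hist.ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def same_person(print_a: np.ndarray, print_b: np.ndarray,
                threshold: float = 0.9) -> bool:
    """Cosine-similarity match: above the threshold, the system treats
    two detections (possibly from different cameras) as one individual."""
    return float(print_a @ print_b) >= threshold

# Toy usage: a person in a red coat seen by two cameras, and another
# person in a blue coat (solid-colour crops stand in for real patches).
red_cam1 = np.full((128, 48, 3), (200, 30, 30), dtype=np.uint8)
red_cam2 = np.full((128, 48, 3), (210, 20, 25), dtype=np.uint8)
blue_cam = np.full((128, 48, 3), (30, 40, 200), dtype=np.uint8)

print(same_person(digital_print(red_cam1), digital_print(red_cam2)))  # True
print(same_person(digital_print(red_cam1), digital_print(blue_cam)))  # False
```

The point of the sketch is that the system never needs a name or a face database: a sufficiently distinctive vector of physical characteristics is enough to single one person out of a crowd and follow them, which is precisely what makes the data biometric.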

In conclusion, the functionalities of an AVS system, when applied to people, will systematically require the processing of biometric data.

AVS is always disproportionate

Once it has been demonstrated that AVS systems process biometric data, the strongest level of protection, granted to sensitive data, applies. Under this specific framework, sensitive data cannot be processed unless the processing is “strictly necessary” (see article 10 of the Law Enforcement Directive).

Practically speaking, this requirement means that processing is only lawful when there are no other means less detrimental to individual freedoms to reach the same goal. This requirement of strict necessity is not new and has already been used to limit or ban the most intrusive technologies.

For example, a few years ago, the Provence-Alpes-Côte-d’Azur region tried to implement an experimental policy of facial recognition at the entrance of two high schools. The CNIL ruled that the aim of securing entrances and regulating traffic into the schools “could undeniably be reasonably achieved through other means”, deducing that the measure was disproportionate.

Similarly, in a warning to the city of Valenciennes revealed by the French news website Mediapart, the CNIL considered an AVS system implemented by the city to be disproportionate, in particular because the city had not proven the system’s necessity nor documented the absence of alternative options.

The French administrative High Court, the Conseil d’Etat, followed the same reasoning when we challenged the use of drones by the French police in demonstrations, alongside the Ligue des Droits de l’Homme (a fellow NGO). The judges of the Conseil d’Etat ruled that the Ministry of the Interior did not provide “sufficient elements to establish that the aim of ensuring public safety during gatherings in public spaces could not be fully achieved, under current circumstances, without using drones” (Conseil d’Etat, no. 446155, 22 December 2020, par. 11).

Finally, this legal mechanism has also been successfully invoked against “classic” video surveillance – that is, non-biometric – in the township of Ploërmel, in Brittany. The Court of Appeal ruled that the township had not produced any statistics or evidence of specific risks that would establish the necessity of video surveillance.

In the case of AVS used for policing, there will always be other means of ensuring public safety that do not involve automated technology monitoring the behaviour of individuals in the street. As discussed in our article explaining the political reasons to oppose AVS, we believe the safety of all can only be achieved through human and social action, attention to others and care.

The balance of interests required by the proportionality test limits and excludes unreasonable AVS systems, since the infringement on privacy resulting from the processing of biometric data will only very rarely, if ever, meet the requirement of strict necessity to achieve the aim pursued. This “strict necessity” requirement is therefore, at this time, a well-documented and effective legal mechanism to prohibit abusive use of technologies by the police in public spaces.

The paradigm must not change

The proposal for an Artificial Intelligence Act and political leaders’ openly displayed goal of changing the current legal framework to favour the private interests of the surveillance industry are an attack on the foundations of our rights and their protection.

These actors defend an approach that is no longer based on necessity, as previously described, but on risks: the legal framework would no longer be all-encompassing, as it is currently, but would differ depending on the goals and purposes of the technologies. This means that the use of certain technologies would be more or less widely authorised, depending on the effective risks these technologies would represent for the rights and freedoms of the population.

For example, in its draft regulation, the Commission suggests a classification of the uses of facial recognition and AVS depending on the circumstances of their use (in a public space, in real time, for policing purposes…), regardless of whether these technologies are necessary or not. This is a complete reversal of the way our rights and freedoms are protected, as we explained a few months ago. Data subjects would have to prove the damage they have suffered, instead of public authorities using these technologies having to systematically prove that this use is not disproportionate. The burden of proof would be reversed, to the detriment of our freedoms.

Indeed, just because a technology is “low risk” does not mean it is “necessary”, nor even desirable. Economic actors are attempting to justify this logic by claiming that safeguards would limit any risks. Such safeguards are illusory and would never be enough to prevent unnecessary processing.

As we have seen over the past few years, safeguards are never enough to limit technologies that have, in most cases, already been implemented, sometimes on a large scale, without even being lawful. Even if the use of these technologies ends up being challenged, they will already have produced unlawful and damaging effects. Impact assessments, controls by supervisory authorities such as the CNIL, supposed local counterweights, the public’s right to information: none of these safeguards prevents authorities from breaking the law.

If the risk-based approach ends up being adopted, it will give the long-awaited signal for the entire AVS sector to deploy its systems massively and at breakneck speed. Tomorrow as much as today, only prohibitions, in particular those based on necessity, can protect us. In fact, this is the opinion of the European data protection authorities (the European Data Protection Board and the European Data Protection Supervisor) on the draft of the Artificial Intelligence Act: both authorities have called for the absolute prohibition of AVS technologies.

In conclusion, a paradigm shift replacing the current necessity-based approach with a new risk-based approach would present processing that is unequivocally unlawful as potentially lawful. It would result in a massive deployment of unlawful AVS systems, from whose damaging effects no safeguard could protect the population. This is why we advocate for upholding the current legal framework, which prohibits these practices and is able to protect the public from abusive surveillance by the authorities.


This article has been translated by our volunteer group. Warm thanks to them all <3
