How is algorithmic video surveillance illegal?

Algorithmic video surveillance (VSA) is being installed in our cities in complete opacity and, above all, in complete illegality. For several years we have been fighting these systems, in particular by challenging them in court. We believe that the rules currently in force make it possible to oppose the deployment of these technologies and to protect against the infringements of freedoms they entail. However, the European Commission is pushing for the adoption of new rules, favorable to manufacturers, to regulate “artificial intelligence” systems, including, among others, VSA. In its wake, the CNIL is advocating a new sector-specific framework. In our response to its consultation, we explained why the current protective rules should not be abandoned in favor of new sectoral rules. Here is a summary of our arguments (see our full position here).

Personal data law gives special protection to so-called “sensitive” data because of the particularly intimate information it reveals (such as political opinions or sexual orientation). Among this sensitive data is the category of so-called “biometric” data: “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person”.

This definition can be broken down into three elements that systematically appear when talking about VSA.

First, the data must result from specific technical processing.

This covers VSA systems, since they operate on top of the general processing that consists of filming the public space, and they pursue a particular objective (see below). The technical processing is specific in that it consists of applying an algorithm or computer program to video streams in order to isolate, characterize, segment or highlight information relating to a filmed natural person, or to extract from the video stream, even after the fact, data concerning that person.
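To make the distinction concrete, here is a minimal, purely illustrative Python sketch of what this “specific processing” adds on top of merely filming: an algorithm turns raw detections into structured data about each person. All names, fields and values below are hypothetical; real systems would use neural-network object detectors.

```python
from dataclasses import dataclass

@dataclass
class PersonAttributes:
    # Attributes a VSA algorithm might extract from a single frame.
    silhouette_height_px: int
    clothing_color: str
    position: tuple  # (x, y) foot point in the frame

def specific_processing(frame_detections):
    """The 'specific technical processing': on top of the general
    processing (filming and storing frames), an algorithm isolates each
    detected person and characterizes them as structured personal data."""
    return [
        PersonAttributes(
            silhouette_height_px=d["bbox"][3] - d["bbox"][1],
            clothing_color=d["dominant_color"],
            position=((d["bbox"][0] + d["bbox"][2]) // 2, d["bbox"][3]),
        )
        for d in frame_detections
        if d["label"] == "person"
    ]

# Hypothetical detector output for one frame; bbox = (x1, y1, x2, y2).
detections = [
    {"label": "person", "bbox": (10, 20, 50, 180), "dominant_color": "blue"},
    {"label": "car", "bbox": (200, 50, 400, 150), "dominant_color": "red"},
]
attrs = specific_processing(detections)
```

The point of the sketch is that the output is no longer footage but per-person records, which is what brings the processing within the biometric-data definition.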

Then, the data must relate to the physical, physiological or behavioral characteristics of a person.
These are precisely the data that VSA captures:

  • physical or physiological information may relate to the body of a filmed person in the broad sense, such as the face, the silhouette, or any isolated characteristic of the body: hair color, skin color, eye color, face shape, height, weight, age;
  • behavioral data covers all information relating to the action of the body in its environment and in space: a garment or an accessory worn by the person at a given moment, a gesture, an expression of emotion, a direction of movement, a position in space and time (sitting, standing, static, walking pace…).

Finally, the purpose of the processing must be the unique identification of the person. According to the European Data Protection Board (EDPB, the body that brings together the European data protection authorities), this function is not limited to revealing a person’s civil identity: it consists of individualizing them within an environment so as to recognize them across several images.

In VSA, each system is programmed to cross-reference specific elements (silhouette, clothing color, position, direction, behavior) in order to:

  • recognize a person across several images or several video streams, whether over time or across space, by assigning them a digital fingerprint that makes it possible to characterize their attributes or behavior and to isolate them in the images. The most typical example is tracking a person through public space filmed by several cameras;
  • perform a targeted action on the person using the information on physical or behavioral characteristics obtained by the VSA. This information can be transmitted to agents in the field, allowing them to “recognize” the person in a unique way and to act on them (“the man in the blue hat is in the main street, check him”).

In both cases, the person is uniquely identified with respect to their environment, a group of people or a scene.
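The “digital fingerprint” mechanism described in the first bullet can be sketched in a few lines. This is a deliberately toy illustration: real systems compute learned embedding vectors, whereas the hand-crafted vector, attribute names and threshold below are all assumptions chosen only to show the principle of re-identification across cameras.

```python
import math

def fingerprint(person):
    """Toy 'digital fingerprint': a numeric vector built from the kind of
    physical and behavioral attributes described above (height, clothing
    color, walking speed)."""
    colors = {"blue": 0.0, "red": 1.0, "green": 2.0}
    return (person["height_px"] / 200.0, colors[person["shirt"]], person["speed"])

def same_person(a, b, threshold=0.5):
    """Re-identification: two detections from different cameras are linked
    when their fingerprints are close enough in feature space."""
    return math.dist(fingerprint(a), fingerprint(b)) < threshold

# Hypothetical detections from three camera feeds.
cam1 = {"height_px": 170, "shirt": "blue", "speed": 1.2}
cam2 = {"height_px": 168, "shirt": "blue", "speed": 1.3}  # likely the same person
cam3 = {"height_px": 150, "shirt": "red", "speed": 0.2}   # someone else
```

Note that no civil identity is involved at any point: the matching alone is enough to single the person out of the crowd, which is exactly why the EDPB treats this as unique identification.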

In conclusion, the functionalities of VSA systems relating to people will systematically involve the processing of biometric data.

Once it is established that biometric data is being processed, the stronger protection given to sensitive data applies. Under this specific framework, sensitive data may only be processed if a requirement of “absolute necessity” is met.

In practice, this requirement means that processing will only be considered lawful if no other means less infringing on freedoms would make it possible to achieve the objective pursued. This requirement of absolute necessity is not a legal novelty: it has already served to limit or prohibit the most intrusive technologies.

For example, when the PACA region tried to set up a facial recognition experiment at the entrance to two high schools, the CNIL ruled that the purpose of securing and streamlining entry into the schools “can unquestionably be reasonably achieved by other means”, concluding that the device was disproportionate.

Similarly, in a warning to the city of Valenciennes revealed by Mediapart, the CNIL found that the VSA system put in place by the city was disproportionate, in particular because its necessity had not been proven and the absence of an alternative had not been documented.

The Council of State followed the same reasoning when, alongside the LDH, we challenged the police’s use of drones during demonstrations. For the judges, the ministry provided “no evidence to establish that the objective of guaranteeing public safety during gatherings of people on the public highway could not be fully achieved, in the current circumstances, without the use of drones”.

Finally, this mechanism has also been effectively mobilized against so-called “classic” (non-biometric) video surveillance in the municipality of Ploërmel, the city having failed, according to the Court of Appeal, to provide any statistics or evidence of particular risks that would justify the need for the device.

In the case of police VSA, there are always other ways to ensure safety than automated technology monitoring the behavior of individuals in the street. As we explained in our article on the political reasons to oppose VSA, people’s safety can only be found in human and social action, attention to others, and care.

The balancing required by the proportionality test therefore makes it possible to limit and exclude any abusive VSA device, since the invasion of privacy caused by the processing of biometric data can only very rarely, if ever, be deemed strictly necessary to achieve the desired objective. This criterion of absolute necessity is therefore, today, a documented and effective legal mechanism for prohibiting the misuse of technologies by the police in public spaces.

Through the draft regulation on artificial intelligence, as well as leaders’ stated intentions to modify the current framework to favor the industrial and economic interests of the sector, what is underway is the dismantling of the protective foundation of our rights.

These players are trying to defend an approach no longer based on necessity, as described above, but on risk: the legal framework would no longer be uniform, as it currently is, but would differ depending on the objectives and purposes of the technologies. In other words, the use of certain technologies would be authorized more or less broadly according to the risks they supposedly pose to the rights and freedoms of the population.

For example, in its draft regulation, the Commission proposes to classify uses of facial recognition and VSA according to the circumstances of their application (in the public space, in real time, for law enforcement purposes, etc.), regardless of whether they are necessary or not. This is a total reversal of the way our rights and freedoms are protected, as we explained a few months ago. It would be up to the people concerned to demonstrate the harm caused to them, and no longer up to the public authorities deploying these technologies to systematically demonstrate that their use is not disproportionate. The burden of proof would be reversed, to the detriment of our freedoms.

However, a technology being “low risk” is not enough to make it “necessary”, let alone desirable. Above all, these players try to justify this logic by arguing that safeguards would limit the risks. Such mechanisms are illusory and could never compensate for unnecessary processing.

As we have seen for several years, safeguards are never enough to limit technologies that are most of the time already deployed, sometimes on a large scale, even though they are illegal. Even when they are contested, they will already have produced their unlawful and harmful effects. Impact assessments, the CNIL’s oversight powers, so-called local checks and balances, the public’s right to information: none of these guarantees prevents the authorities from violating the law.

If the risk-based approach were adopted, it would give every VSA player the signal they have been waiting for to deploy all of their systems massively and at full speed. Tomorrow, as today, only prohibitive measures, based in particular on necessity, can protect us. This is, moreover, the opinion of the data protection authorities (the European Data Protection Board and the European Data Protection Supervisor) on the proposed Artificial Intelligence Regulation, both of which call for a complete ban on VSA technologies.

In conclusion, changing the paradigm by replacing the current necessity-based approach with a new risk-based approach would present as potentially lawful processing operations whose illegality is today beyond doubt. This change would lead to the massive deployment of illicit VSA systems, without any safeguard capable of limiting their harmful effects on the population. This is why we defend maintaining the current legal framework, which allows these practices to be prohibited and can protect the population against the authorities’ surveillance abuses.
