Yesterday, at the Council of the European Union, Member States adopted the AI Act, the regulation governing Artificial Intelligence systems. This marks the final adoption of a piece of legislation under discussion since 2021 and initially presented as an instrument to protect rights and freedoms in the face of the AI steamroller. In the end, far from the initial promises and emphatic commentary, this text is tailor-made for the tech industry and European police forces, as well as for other large bureaucracies eager to automate social control. Resting largely on self-regulation and riddled with loopholes, the AI Act will prove mostly incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI.
The story began with fine promises. In April 2021, when presenting its proposal for a regulation on Artificial Intelligence, the European Commission wrote to reassure us: it would pay the “utmost attention” to the European petition calling for a ban on facial recognition. The text the Commission presented a few days later was accompanied by a proposal for a moratorium on some of these uses, within a package of measures meant to bring a modicum of order to a booming AI market.
Two years later, the European Parliament pushed the text a notch further, extending various bans on police uses of AI techniques and on “social scoring” systems. But after repeated obstruction from national governments, chief among them the French authorities, little or nothing of the initial promises remains. In the end, we are left with a “legislative package” that is certainly voluminous, even verbose, but also very vague. The regulation rests largely on self-regulation and will prove incapable of protecting the public interest in the face of the programmed proliferation of AI systems.
A law for the proliferation of AI
As we had anticipated, rather than raising fundamental rights safeguards, the AI Act aims to scale up data production and collection for the benefit of industry. Existing European legislation, such as the General Data Protection Regulation (GDPR) in force since 2018, finds itself partly weakened by these new special rules applicable to AI systems.
But for the public and private actors who pushed for the adoption of this text, the priority was never to protect fundamental rights. The regulation rests on a “risk-based” approach, classifying types of AI systems or their areas of application (police, health, employment, etc.) into four risk categories: low risk, not targeted by the regulation; limited risk, for certain systems subject only to the transparency measures set out in Article 50; high risk, subject to the regulation’s various obligations; and unacceptable risk, for the few practices prohibited by Article 5.
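To make this pyramid concrete, here is a minimal illustrative sketch in Python. The tier labels and example use cases are our own assumptions for illustration; the actual assignments are made by Article 5, Annex III and Article 50 of the regulation, not by any such table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-tier risk pyramid (illustrative labels)."""
    UNACCEPTABLE = "prohibited under Article 5"
    HIGH = "subject to the regulation's obligations (Annex III)"
    LIMITED = "transparency duties only (Article 50)"
    LOW = "not targeted by the regulation"

# Hypothetical mapping, for illustration only; real classifications
# are determined by the regulation's annexes (and, in practice,
# largely by the providers' own self-assessment).
EXAMPLES = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.LOW,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case} -> {tier.name}: {tier.value}")
```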
Through this legislative edifice designed to regulate a rapidly expanding sub-sector of the tech industry, and at a time when the controversies surrounding AI keep multiplying (its dehumanizing effects on work, its environmental cost, its impact on fundamental rights), the aim is above all to set the conditions for greater “social acceptability” of these technologies, the better to impose their deployment.
In this process, Emmanuel Macron and the French government have played their part to perfection. Busy promoting the Startup Nation, obsessed with the race for innovation the better to attract foreign capital, unafraid of conflicts of interest and other forms of corruption during the negotiations on the regulation, the French government bears a historic responsibility for this betrayal of the text’s initial promises.
The French strategy is now being pursued through the bill on the “simplification of economic life”, which aims in particular to speed up procedures for building data centers by sidestepping local opposition. Microsoft will be one of the main beneficiaries, having just announced a 4 billion euro investment to expand its storage and computing infrastructure in France. And it was also Microsoft that recently set its sights on Mistral AI, promoted as the national AI champion, announcing a partnership with the French startup as soon as the political agreement on the text of the AI Act was reached. The argument of digital sovereignty put forward by France to lower the requirements of the AI Act is therefore entirely misleading, since in practice the government is quite content to see the dominant positions of foreign tech multinationals strengthened.
Under the regulation adopted yesterday, the companies that design AI systems will be the first judges of the level of risk posed by their own products (Article 43). They will thus be free to decide that their systems are not risky and therefore exempt from the regulation’s obligations. Otherwise, private certification bodies will be tasked with issuing attestations of conformity [1]. Generally speaking, the forms of regulation promoted by the Act essentially amount to self-regulation and the monitoring of “good practices”, all under the aegis of private-law standardization bodies (CEN-CENELEC at the European level, AFNOR in France) within which industry will have free rein. And while the text does provide for some transparency, it remains subject to “trade secrets”, whose protection is stressed throughout the legislation. In other words, the debate over whether France’s CNIL, Arcep or Arcom will oversee the application of the regulation is not a major issue.
In another sign of the focus on business interests, Chapter VI contains a large number of regulatory mechanisms supposedly “supporting innovation”, designed to let the private sector train its AI models with enhanced support from public authorities.
The freewheeling Technopolice
While it was clear from the outset that the risk-based approach promoted by the European Union to regulate AI was designed to promote a market rather than protect fundamental rights, one could at least hope that the most dangerous uses of AI systems, such as facial recognition, would be banned as carrying “unacceptable risks”. That, in any case, was the demand of dozens of NGOs and of the more than 250,000 European citizens united in the Reclaim Your Face coalition, and what the Commission had initially hinted at.
Here again, with the complicity of other EU players, France has worked to fend off any ambitions in this area, leaving a free hand to police forces and the surveillance industry. A reading of the text suffices to illustrate the point. Article 5, which lists the prohibitions that remain, does indeed outlaw “the use of real-time remote biometric identification systems”. On paper, this wording seems to cover many algorithmic video surveillance (AVS) applications. But, on the one hand, any use of such systems escapes this narrow wording as soon as it does not operate in real time. And on the other, exceptions drastically reduce the scope of the ban: the use of real-time facial recognition will be possible to search for victims of “abduction, trafficking in human beings or sexual exploitation of human beings”, as well as to search for missing persons, or to prevent “a specific, substantial and imminent threat to the life or physical safety of natural persons” (Article 5.1.h). It is also authorized to prevent “a genuine and present or genuine and foreseeable threat of a terrorist attack”, and as part of criminal investigations to locate suspects of a whole range of offenses punishable by at least four years’ imprisonment, including sabotage, organized crime, murder, and many others. So many feet in the door, likely to be pushed wider to cover new situations in the future.
What’s more, in this area, intelligence agencies and the military are freed from all constraints, as are scientific research teams, who will be able to “innovate” at will. Article 2, which defines the regulation’s scope, excludes AI systems developed for “scientific research purposes” as well as those used for “military, defense or national security purposes”. Yet more holes in the net.
In practice, all the forms of police AI we are fighting as part of the Technopolice project seem to be permitted under the regulation, from AVS to predictive policing. At most, they may be considered “high-risk”, as falling within certain sensitive areas of application defined in Annex III [2]. They will then be subject to enhanced transparency and standardization obligations: those responsible for these systems will have to identify, assess and address “reasonably foreseeable risks (…) to health, safety or fundamental rights” (Article 9.2), implement good data governance practices (Article 10), and keep records of their systems’ activity (Article 12). Standardization and self-regulation, all under the aegis of benevolent public agencies, will thus be the order of the day.
Even so, the transparency of high-risk systems will remain extremely limited when it comes to police surveillance tech. While high-risk systems must normally be listed in a public register, law enforcement and immigration agencies have been granted a special exemption (Articles 49.4 and 71). Nor will they have to publish the impact assessments normally mandatory for high-risk systems.
Finally, even systems that are indeed classified as high-risk may still escape their obligations through another one-sided derogation added to the text. A “filter” defined in Article 6.3 provides that the corresponding obligations do not apply when the AI system in question is intended to “perform a narrow procedural task”, to “improve or optimize” the result of a “human task” already performed, or to carry out a merely “preparatory task”, or when, more generally, although theoretically falling within the high-risk applications defined in Annex III, it “do[es] not present a significant risk of harm to the health, safety or fundamental rights of natural persons”. All these legal notions are particularly broad and leave the door open to all kinds of abuse, especially when their interpretation is left to the discretion of private players, as the sketch below illustrates.
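Schematically, the filter works as a simple disjunction: a single broadly construed condition is enough to lift the high-risk obligations. The Python sketch below is our own simplification with hypothetical names, not the legal text, and the self-assessment it caricatures is precisely the problem:

```python
def escapes_high_risk_obligations(
    narrow_procedural_task: bool,
    improves_prior_human_task: bool,
    preparatory_task_only: bool,
    deemed_no_significant_risk: bool,
) -> bool:
    """Simplified rendering of the Article 6.3 'filter': any one of
    these conditions, assessed by the provider itself, removes the
    high-risk obligations from an Annex III system."""
    return (
        narrow_procedural_task
        or improves_prior_human_task
        or preparatory_task_only
        or deemed_no_significant_risk
    )

# A provider need only claim one of the four conditions:
print(escapes_high_risk_obligations(False, False, False, True))  # True
```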
As for predictive policing systems based on the calculation of risk scores by geographical area, whose dangers in terms of discrimination we recently highlighted, they seem to escape even the very restrictive definition given on the subject in Annex III for high-risk systems, and therefore to escape the obligations laid down for that category [3].
Algorithms of social control, with shameless greenwashing lying in wait
As for “social scoring”, also presented as one of the text’s most ambitious prohibitions and seemingly a way of preserving Europe’s good conscience in the face of “Chinese-style” digital dystopia, it is another cold shower. “Social scoring” (for example, as practiced by the Caisses d’allocations familiales in France, which we have been documenting for months) remains permitted as long as it is not part of a centralized system capturing data from different social contexts (police, work, school, etc.), as the systems tried out in China are (recital 31). Sector-specific scoring systems used by welfare institutions can thus continue unabated: although the AI Act classifies them as high-risk applications, beyond the few procedural obligations already mentioned it does nothing to effectively combat the discriminatory algorithms that social administrations use to rate and spy on welfare recipients.
And then there are the many statements of principle whose self-contradiction ought to be obvious. One thinks in particular of the pious wish that “AI systems [be] developed and used in a sustainable and environmentally friendly manner” (recital 27). Or of the tenacious idea that ever-increasing computerization will make it possible to meet the challenge of climate change (recital 4), while at the same time (for instance through the aforementioned simplification bill) the construction of energy-intensive data centers is being encouraged at every turn and mining is being revived to fuel the “digital transition”. This is outrageous but hardly surprising: among technocrats and other high-tech apostles, AI is now a key element of any greenwashing discourse.
The AI Act’s only rules on environmental impact require developers of high-risk AI systems seeking certification for their products to release information on processes aimed at “improv[ing] AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle”, and on “the energy-efficient development of general-purpose AI models” (Article 40). Similarly, under Annex XI, developers of generative AI based on large models, such as OpenAI, will have to provide information on the “known or estimated energy consumption” linked to the training and use of these models. With the whole thing again overseen by standards bodies, the exercise should be a mere formality, with perfectly cosmetic effects.
At the end of the day, far from protecting the values of democracy, the rule of law and respect for the environment that the European Union still claims to embody like a beacon in the night, the AI Act is the product of a disastrous realpolitik. Faced with the stranglehold of China and the United States, its primary aim is to allow Europe to make a comeback in the race for the latest digital technologies, deemed the true yardsticks of power. Not only does this race appear lost in advance, but by running it the European Union helps legitimize a techno-solutionist gold rush that is dangerous for fundamental rights and ecologically unsustainable.
The net effect of the generalization of AI, as a technical and political paradigm, is to multiply the damage caused by the over-computerization of our societies. Since it is now clear that the AI Act will be of no use in curbing the current drift, we will have to collectively consider other means of struggle to disrupt that process.