Whether you are retired, a student, unemployed, a parent or a health insurance beneficiary: whatever your situation, your most intimate data is fed into dozens of algorithms whose job is to assess your integrity.
By exploiting our most intimate data (health data, family life, professional situation…), these algorithms continuously compare us to lists of suspect "profile-types": the lazy unemployed person, the ill-intentioned precarious worker, the single mother, the retired person fond of travelling, the lying disabled person, the dishonest sick person.
Developed in total secrecy in the name of the "fight against fraud" by the social security authorities – the Family branch of the French welfare system (CAF), the French unemployment agency (Pôle Emploi), the French health insurance fund (Assurance Maladie), the French public pension system (Assurance Vieillesse), the French social security scheme for farmers (Sécurité Sociale Agricole)… – each of these algorithms assigns us a risk score – or "suspicion score" – which is used to select which of us should be subject to checks.
We are witnessing the advent of a true "liberal" version of the social credit system. Far from the idea of an authoritarian regime assigning a single score to each citizen, on which all our interactions with administrations would be based, the logics of control at work here are subtler and more pernicious.
Documenting these practices is made difficult by the administrations' refusal to provide any information whatsoever, which we interpret as a sign of widespread embarrassment. We are fighting against this opacity and will publish information here as it becomes available.
At La Quadrature du Net, we refuse to let our social system be transformed into a gigantic real-time surveillance system. We refuse to allow the vast amounts of data held by social administrations, originally collected to ensure their proper functioning, to be misused for social control purposes. We refuse to let the computerization of the world become synonymous with dehumanizing rationalization: the race for "efficiency", the reduction of our lives to a handful of numbers, and the constant sorting and comparing of individuals.
We invite you to join us in the fight against the unlimited extension of police logic within our social administrations. We need help mapping the digital surveillance practices of these administrations, legal expertise, and your feedback!
How do welfare authorities keep tabs on us?
Join the fight against algorithmic social control!
Welfare
After a long legal battle based on freedom-of-information requests, we have obtained the source code of the algorithm used by the CAF. The CAF has succeeded in combining dystopian practices with discrimination against the most disadvantaged: its algorithm deliberately targets them, resulting in massive over-control of poor people, disabled people, and women raising children alone.
Health
Given that the CNAM holds a gigantic database covering the health data of more than 69 million people, the development of such practices within it is particularly worrying.
As early as 2010, it experimented with an algorithm for scoring low-income beneficiaries. From 2014, it also set out to develop tools for detecting "atypical profiles" of healthcare professionals. Nurses and doctors are now considered "suspects" from whom the Assurance Maladie must protect itself.
Its digital tracking practices do not stop at the use of profiling algorithms. The exact extent of algorithmic surveillance at the Assurance Maladie, however, remains difficult to measure, as its managers oppose even a minimum of transparency, as various exchanges we have had with them attest.
We have initiated various legal actions to document these practices better, and will soon detail all our research on a dedicated page.
Pension
While profiling algorithms appear to be a lower priority at the CNAV than at the CNAF or CNAM, since 2016 the CNAV has used an algorithm to sort "good" pensioners from "bad" ones.
The CNAV is, however, at the forefront of other forms of digital surveillance. It is particularly involved in developing the use of facial recognition to remotely verify the "existence" of insured persons. This technique, authorized since July 2023, was pushed through by stigmatizing pensioners living in the Maghreb, suspected of continuing to receive their pension after their death…
We’ll be coming back to you in due course with any information we gather on the subject.
Unemployed
While Pôle Emploi has assured us that no algorithms are currently being used for job-search control purposes – its algorithms supposedly focus on detecting "scams" such as identity theft or the production of forged documents – we are awaiting further documentation.
Ask your score!
Administrations are legally obliged to inform you of the "suspicion scores" they have assigned to you, along with an explanation of how they are calculated.
We will soon put online sample emails to send to the various administrations. By sending them, you can help us both document the (bad) practices of these administrations and put pressure on their leaders! In the meantime, you can contact us at algos@laquadrature.net!
Mapping algorithms
The information available is limited and scattered across many sources, and the heads of the main government departments are doing everything in their power to oppose any request for information about these algorithms. That is why we need your help. We will soon publish a guide to help you find your way around, along with ideas for action to help us document government surveillance practices.