The big bad social score that hides the forest


Fighting the “social score” is important: it is one of the points Thierry Breton’s Europe puts forward to praise the merits of its new AI Act. Yet the practice had long since been ruled out, and in itself it has little to do with artificial intelligence.

Numerama & France Culture

This article is taken from Numerama’s December 15 column in Brave New World, the France Culture show about digital technology, of which Numerama is a partner.

The AI Act is as difficult to pronounce as it is to understand. To simplify things, many have focused on a striking element of the text: the ban on social scoring. This is the idea of assigning an average rating to a human being based on their behavior, a rating that can decrease or increase over time.

I could explain it to you, but in reality, we have been bombarded with this idea for so long that I’m sure everyone has already heard about it. We owe it above all to the Black Mirror episode broadcast in 2016 on Netflix, Nosedive, in which all the protagonists rate each other.

The social score is the scarecrow that we brandish as the greatest possible algorithmic threat. It has become the consensual example by definition: who could defend such a concept? This is why Europe likes to seize it as soon as it can.

Already in October 2021, the European Parliament had positioned itself against “social rating”, considering it “contrary to fundamental rights and human dignity”. Six months before that, the European Commission had already classified social rating as an “unacceptable risk”, the highest level in its classification of AI dangers.

Look at the social score so as not to see the rest

It is fashionable to point out the hypocrisy behind this outcry. Yes, it helps to anticipate the worst. But we should also look at what is happening around us and perhaps sweep in front of our own door. Citizens have already been rating each other for years.

Think of Airbnb, Uber and Deliveroo, services that ask you to give one to five stars to someone else’s work, a rating that then determines that person’s working conditions. You will tell me that I am only citing private companies. That would be to ignore certain administrative foundations of our society.

Recently, Le Monde and the association La Quadrature du Net revealed how the algorithm of the CAF works. The Family Allowance Fund assigns a “risk score” to each profile: essentially, a score meant to predict whether a beneficiary is likely to commit fraud. Surprise! It turns out this disadvantages certain profiles, such as single women and the more precarious.

But it doesn’t stop there: our entire education system is also based on ratings. At school, students earn good or bad grades, a 1/10 or an 18/20, sometimes locking them into tunnels from which it is then difficult to escape.

AI makes a convenient scapegoat. We did not wait for it to produce intrinsically biased mechanisms. Raising the specter of the social score is simply pointing at the tree that hides the forest.

