Adopted for the 2024 Paris Olympics, algorithmic surveillance has never proven its effectiveness

On January 24, the Senate adopted, after heated debate, article 7 of the Olympic Games bill, which authorizes, on an experimental basis, the deployment of cameras coupled with algorithmic detection systems. These are tools capable, according to their promoters, of detecting crowd movements, abandoned luggage or suspicious behavior. The heart of the debate focused, quite naturally, on the major risks that the normalization of surveillance technologies poses to privacy. But another crucial element has received little discussion: the effectiveness of these tools presented as "intelligent".


Experimentation: the term suggests a supervised, time-limited, scientific undertaking. A life-size test whose results would be scrutinized in complete transparency by experts, to determine whether the technology is up to the task, useful, respectful of privacy and worth the allocated budget.

In practice, the decade of "experiments" already conducted in augmented video surveillance shows that it is systematically the opposite that occurs. In 2016, the SNCF tested "intelligent" cameras designed to detect attacks. No results of the experiment were ever communicated.

In 2019, the town hall of Nice claimed to have carried out tests of facial recognition cameras that succeeded in 100% of identification trials. Six months later, the National Commission for Computing and Liberties (CNIL) strongly criticized this announced "success", the details of which were never made public, which did not allow, according to the institution, "an objective vision of this experiment [or] an opinion on its effectiveness". Since then, the city has turned to another technology, without facial recognition.

In 2020, the RATP "experimented" for a few months with the automatic detection of mask-wearing in the metro. It explains today to Le Monde that it did not follow up, due to an "average detection rate of 89%" that remained "lower than observations made in the field".

Promises of an ultra-efficient tool

Abroad, where large-scale tests have been conducted in the United States and the United Kingdom, more detailed data has sometimes been disclosed. It paints a very unconvincing picture of the usefulness of these technologies. In 2017, a facial recognition experiment at the Notting Hill Carnival in London ended in near-total failure, with a great many "false positives" – people who were wrongly identified. In 2021, a government audit in Utah, in the United States, issued an extremely critical report on a "smart" CCTV system that the state's police force had purchased from the company Banjo two years earlier.

