“AI opens up new surveillance possibilities”

While the giants of artificial intelligence (AI) promise ever more sophisticated virtual assistants, one risk posed by the expansion of this technology receives little attention: AI also opens doors for surveillance. Digital technology already makes it possible to analyze and detect human behavior for police, professional or commercial purposes, and recent advances in text, sound and image processing could extend these capabilities.

The most obvious application is “intelligent” video surveillance, which uses AI to analyze the content of camera images, in particular to identify people through facial recognition – a technique already widespread in China. The recent AI Act regulation prohibits, in the European Union, such “real-time remote biometric identification in publicly accessible spaces”, while authorizing its use by law enforcement, under certain conditions, for “the prevention of real, current or foreseeable threats, such as terrorist attacks, and the search for persons suspected of the most serious crimes”. After-the-fact use is also possible, under conditions.

The NGO Amnesty International regretted the “missed opportunity” of a total ban. The association La Quadrature du Net deplores the first trials in France of algorithmic video surveillance, authorized by the law on the Olympic Games adopted in March 2023: without facial recognition, these trials aimed to detect movements deemed problematic in public transport, during a concert or at a football match. The association fears a breakthrough achieved “in small steps”. For its supporters, AI removes a major obstacle to video surveillance: the difficulty of having very large quantities of images analyzed by humans. For its detractors, it is a threat to civil liberties and also a source of errors and bias.


The possibilities don’t stop at these hotly debated areas. One seemingly innocuous feature presented on May 14 by Google prompted a reaction from privacy advocates: it offers to send an alert to the user of an Android phone if they receive a call that the AI deems likely to be a scam. “It’s incredibly dangerous”, tweeted Meredith Whittaker, president of the encrypted messaging app Signal. According to her, scanning telephone conversations could ultimately be used by states to detect all types of behavior deemed illegal. To allay concerns, Google pointed out that conversations are analyzed on the user’s phone and that the user must consent – “opt in” – to use the service.
