“Global regulation of artificial intelligence, co-written by technology players and states, must see the light of day”

What are they playing at? For three months, certain artificial intelligence (AI) experts have been multiplying alarmist pronouncements. In March, Elon Musk and a hundred experts called for a “moratorium” on the development of generative AI on the grounds that “we are not certain that their effects are positive and their risks manageable”.

A few days ago, three hundred and fifty experts, including Sam Altman, head of OpenAI, and Demis Hassabis, head of Google DeepMind, published an open letter condensed into twenty-nine words: “Mitigating the risk of extinction from AI should be a global priority, alongside other society-scale risks such as pandemics and nuclear war.” A tailor-made shock formula, designed to capture attention and leave a mark.

Admittedly, the leaps made by artificial intelligence are shaking some of the foundations of our liberal societies, starting with our relationship to truth and authenticity. But we will not protect our core values by stirring up fear. At best, we will breathe new life into a form of Luddism, the English social movement of the 19th century whose weavers, craftsmen and textile workers smashed the first industrial weaving machines in the Midlands.

Rather than acting like wreckers, tech players should direct their energy toward three fronts: supporting companies, reassuring citizens and assisting public authorities.

A protection imperative

The brouhaha around generative artificial intelligence acts as a decoy. In reality, business leaders are still struggling with “traditional” AI. A study by the OpinionWay institute published in May found that only 41% of mid-sized French companies use AI.


A surprisingly low figure, which the executives concerned attributed mainly to a lack of internal skills, the absence of consensus and the risks surrounding data confidentiality. Two of those three obstacles are matters of trust, a trust that the catastrophist outbursts of recent weeks will do nothing to restore. And make no mistake: there can be no acceptance of generative artificial intelligence without widespread adoption of artificial intelligence.

Citizens, for their part, are living in the midst of a dystopia. The new artificial intelligence models stand ready to eliminate their jobs (300 million of them, according to a Goldman Sachs report), or even to bring about the extinction of humanity, according to the open letter from the three hundred and fifty experts.
