The end of the world? For some tech players, AI could be as destructive as nuclear war


Vincent Mannessier

May 31, 2023 at 2:10 p.m.


Nuclear explosion over planet Earth © Paopano / Shutterstock

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This pithy sentence is the entire statement posted on the website of the Center for AI Safety, an organization whose goal is to reduce the risks that the development of AI poses to society. Far from being the first of its kind, the statement stands out for its signatories: the leaders of OpenAI and of Google's DeepMind division, as well as recognized researchers in the field. One can only hope they track down whoever is responsible for all of this.

The existential risks of AI, a very real possibility

This is not the first time that the players most directly involved in the development of artificial intelligence have voiced their concerns on the subject. In March, a letter signed by more than 1,000 of them called for a six-month pause in the development of the technology. While the risks it mentioned are largely credible, pausing the development of AI looks like wishful thinking: the technology has been the object of a frantic race ever since it left the laboratories.

The probability that artificial intelligence will one day cause a serious, even cataclysmic event is by no means science fiction. The risks, both those already identified (deepfakes, election manipulation, loss of control of weapon systems, etc.) and those still unknown, are taken very seriously by researchers and other actors in the field. So much so that they have developed the concept of P(doom): the probability, as estimated by each individual, that AI will cause the extinction of humanity or another irreversible global catastrophe. Few of them place their P(doom) below 10%…

Sam Altman, CEO of OpenAI © TechCrunch

The cognitive dissonance of industry leaders

Still, one may have doubts, or at least question the honesty or motivations of some of these actors, who, let us remember, literally make their living from the development of this technology. The OpenAI lab, led by Sam Altman, was created in 2015 specifically to counter the excesses of "bad" AI, and on a non-profit basis. The latter point was abandoned as early as 2019. As for the former… that remains to be seen.

Sam Altman himself is a complex character. While it is true that he does nothing to minimize the risks his brainchild ChatGPT poses to the public, even going so far as to call for regulation, some criticize his proposals in this area as aiming only to lock in an established order in which he plays a leading role. It is also hard to sort truth from spin when he explains that he is terrified of what AI could do to the world, only to tout, in the next sentence, all the reasons it will make the world a better place.

Finally, it is perhaps worth remembering that, as he himself explained in an interview, Altman is a survivalist: he has already stockpiled everything he would need to survive when the apocalypse comes, weapons and gas masks among other things.

Sources: Engadget, Futurism
