AI has a 20% chance of destroying humanity, but we must keep developing it, according to Elon Musk


Elon Musk says he believes artificial intelligence has a 1 in 5 chance of ending humanity as we know it. He nonetheless thinks the risk is worth taking and that work on AI should not stop.

Army of robots. Credits: 123RF

The meteoric rise of artificial intelligence is very positive in certain areas, such as medicine. It makes possible things that humans could not have achieved without devoting many, many years of uninterrupted work. But behind these beneficial advances lies the fear that AI will one day end up dominating us. It is not an irrational fear born of too much exposure to science fiction films and novels: OpenAI, the creator of ChatGPT, already thinks it has gone too far in its approach to AGI, the artificial general intelligence capable of functioning like a human brain.

Many scientists have looked into what is called p(doom), that is, the likelihood of AI taking over or ending humanity. This could come about through the creation of an unstoppable biological weapon, the outbreak of nuclear war, or even a massive cyberattack that would cause the collapse of society.

Elon Musk, who founded OpenAI with Sam Altman before stepping down in 2018, says he "quite agree[s] with Geoff Hinton that [the p(doom) is] approximately 10 or 20% or something like that". He is referring to Geoffrey Hinton, the Canadian researcher considered one of the pioneers of AI. That said, the billionaire believes that "the probable positive scenario outweighs the negative scenario" and calls for the development of the technology not to be slowed down.

Elon Musk calculated that AI had a 20% chance of ending humanity

The head of Tesla and SpaceX does not explain how he arrived at his figure. That does not stop Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, from saying that it is far from reality. And for good reason: the scientist puts the p(doom) at 99.999999%, no less. According to him, the only way to avoid ending up in a world where artificial intelligence is uncontrollable is simply not to create it.

Read also – To evolve, artificial intelligence needs a body according to Huawei

“I’m not sure why he thinks it’s a good idea to pursue this technology anyway. If he worries that competitors will get there first, that doesn’t matter, because uncontrolled superintelligence is bad in itself, no matter who creates it”, says Roman Yampolskiy. An opinion that some of his colleagues do not share at all. Another AI pioneer, Yann LeCun, thus speaks of a risk of less than 0.01%, insofar as humans remain in control: they can choose to advance AI to a certain level, or to stop before that.

Artificial intelligence must be developed under control so that it does not go off the rails

Elon Musk has a vision similar to Yann LeCun’s. He thinks that by 2030, artificial intelligence will be greater than that of all humans combined. That would be the emergence of the famous AGI which so frightens OpenAI, but which the entrepreneur does not see as a threat as long as humans give it a trajectory. “It’s almost like raising a child, but one who would be a super genius, a child with near-divine intelligence – and how you raise the child is important”, he explains.

Read also – AI: 8 million jobs threatened, this country faces an “apocalypse”

For him, there is one thing to watch at all costs: under no circumstances should the AI be capable of lying, “even if the truth is unpleasant”. Indeed, studies show that once an AI is capable of lying, it will be impossible to reverse this with current security measures. Worse: it is possible that artificial intelligence will learn to lie on its own, without being told to. Geoffrey Hinton thinks so too, estimating that an AI smarter than humans will be “very good at manipulation”, a behavior “that it will have learned from us”. Let us hope that all these warnings are enough to convince researchers not to cross the red line.

Source: Business Insider
