In an interview given after winning the Nobel Prize in Physics, Geoffrey Hinton, one of the pioneers of AI, warns of its potential dangers. Artificial intelligence could slip out of our control sooner than we think.

Is artificial intelligence a benefit or a danger for humanity? No need to take out paper and pen: this is not the next philosophy baccalaureate essay topic. The question is nonetheless very important. Whether we like it or not, AI is part of our daily lives and is not going to disappear anytime soon. Its very real impact does not please everyone, particularly when it comes to employment, where many workers fear being replaced by autonomous systems.
One person is particularly aware of all this, and for good reason: he is considered one of the pioneers of AI. His name is Geoffrey Hinton, and he has just received the Nobel Prize in Physics alongside John J. Hopfield for their work on neural networks. During an interview, the laureate took the opportunity to share his vision of artificial intelligence, and not everything is rosy.
AI is evolving rapidly, and that is not without risk according to one of its founding fathers
One thing worries him above all: that AI slips out of our control. “What worries me is that [the evolution of AI] can also lead to bad things, especially when we create things smarter than ourselves. No one really knows if we will be able to control them.”
Geoffrey Hinton estimates that AI has a 10 to 20% chance of destroying humanity, a figure Elon Musk, for example, agrees with. That, however, should not prevent us from continuing to develop and improve models.
Read also – This AI escapes the control of researchers by rewriting its own code to extend its capabilities
Let’s also not forget that according to employees of OpenAI, the company behind ChatGPT, we are already on the verge of creating artificial general intelligence, that is to say AI functioning like a human brain.
Geoffrey Hinton also took advantage of the interview to take aim at Sam Altman, head of OpenAI. He recalled that the company was originally founded with the aim of creating artificial general intelligence while ensuring it is safe. But according to the professor, this goal has faded into the background over time as the focus shifted to profits, which he deplores.
Ways must be found to prevent AI from taking control, warns Geoffrey Hinton
Asked how to prevent these risks, the scientist specified that for him, the danger does not come from the use of AI by individuals, but from its upstream development, a point that must be taken into consideration today. Hinton thinks that within 5 to 20 years, AI systems will surpass human intelligence.
And insofar as “there are very few examples where more intelligent things are controlled by less intelligent things,” a world dominated by machines is no longer just science fiction.
Also Read – AI can learn by thinking on its own, just like humans
In line with this observation, he invites governments to push AI companies on one point: investing more in the safety of AI models, in other words, to protect humans from them. Currently, the majority of funds are spent on improving the models. The Nobel laureate believes that the financial effort devoted to AI safety “must exceed 1%. It must reach something like a third [of total spending].”
But there is no question of giving in to fatalism. Geoffrey Hinton is well aware of the benefits of artificial intelligence, and he hopes it “will bring significant benefits, increase productivity and improve everyone’s lives.” This is already the case in the field of medicine, where AI can detect many diseases very effectively without resorting to invasive methods.