“OpenAI bears a huge responsibility on behalf of all humanity”


Are big tech companies taking artificial intelligence too lightly? That is what a former OpenAI executive, who has just resigned, seems to suggest.

What a week for OpenAI! On Monday, the start-up specializing in artificial intelligence unveiled its new multimodal model, GPT-4o. The next day, Google, its main competitor, presented similar technology. On Wednesday, we learned of the departure of Ilya Sutskever, one of the company’s co-founders, and Jan Leike, a high-ranking executive. And it is the latter who raised many questions this Friday on the X.com platform.

A departure motivated by values

At OpenAI, the two men headed Superalignment, a team formed in July 2023 and dedicated to studying the risks of artificial intelligence. OpenAI had guaranteed to allocate 20% of its computing resources to this team over the following four years.

While Ilya Sutskever remained rather warm in his departure message, Jan Leike expresses more bitterness in his public thread on Elon Musk’s social network. Although he begins by expressing his affection for the various OpenAI teams, he quickly specifies that he is “at odds with OpenAI management over core company priorities”. The reason: the company’s insufficient attention to safety, privacy, and the societal impact of artificial intelligence. He regrets not having had access to enough computing power in recent months to carry out his research on the subject.

Towards super intelligence

At present, one may wonder what the real risks of artificial intelligence are. For the moment, the chatbots making headlines tend to regurgitate information learned from a given corpus without much reasoning, sometimes with gross errors that we call “hallucinations”. But at the speed at which the field is progressing, we can expect in the coming years (within a decade, according to OpenAI) the arrival of what researchers call a “superintelligence”, more intelligent than humans and capable of solving problems that we face today. And according to Ilya and Jan, “superintelligence could also be very dangerous and lead to the neutralization of humanity, or even its extinction”.

Without even going as far as a Terminator-style machine uprising, it is important to weigh all the potential consequences of such a development. We are already seeing this today with questions about online disinformation, for example.

The responsibilities of OpenAI (and others)

As leading figures behind these advances, OpenAI and Google both carry a heavy responsibility. OpenAI “bears a huge responsibility on behalf of all humanity”, writes Jan Leike. However, he deplores that “safety culture and processes have taken a back seat to shiny products”.

The problem is ultimately economic and commercial. In a globalized capitalist world, AI is more than a tool: it is a marketable product. And in such a market, the first-mover advantage is often decisive. It is therefore difficult for a start-up like OpenAI to dedicate time and resources to anticipating and mitigating general dangers at the risk of being overtaken by Google (it was a close call this week), Anthropic (the Amazon-backed company behind the Claude AI), or a newcomer, from China for example.

For Jan Leike, “OpenAI must become a safety-focused company”. In Europe, the AI Act should regulate certain abuses, but there is no doubt that a company that focuses on its product without worrying about these considerations will have the opportunity to get a head start on all its competitors.





