The boss of OpenAI warns of the dangers of AI and calls for more regulation


Sam Altman, CEO of OpenAI, says it himself: artificial intelligence poses a danger to our society. In a blog post, he calls on governments to set up a regulatory system for this booming industry, modeled on the one put in place for nuclear power.


Countless voices have been raised to denounce the risks AI poses to our society. What was less expected is that one of these warnings would come straight from the CEO of OpenAI, the hottest company in the industry right now. Indeed, the man behind the creation of ChatGPT is categorical: a system to regulate this industry must be put in place quickly, on a global scale.

“It is possible that within the next ten years, AIs will exceed the skill level of experts in most fields, and perform as many productive activities as one of today’s largest companies,” he writes in a blog post. According to him, what he calls “superintelligence” will soon be humanity’s most powerful technology.

One of the leaders of AI calls for more regulation of AI

Despite all the promise this technology holds, the risks generated by such power must not be underestimated, he stresses. Sam Altman therefore proposes three avenues for better regulating the sector. First, he believes that companies working on artificial intelligence must coordinate to develop working methods that ensure user safety.

He imagines, for instance, a government-led initiative on which all of these companies could work together, or even a common agreement setting annual limits on AI growth that must not be exceeded. “Companies should be held to an extremely high standard of safety,” he adds.

Next, Sam Altman argues that AI needs a regulator similar to the International Atomic Energy Agency, whose role is to ensure the peaceful use of nuclear energy. “Any effort above a certain capability threshold […] will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.”


To advance this idea, the CEO of OpenAI suggests that companies could think together about how to work alongside such an international agency, which would avoid placing all of this responsibility on governments. Finally, his last point concerns the technical capabilities companies need in order to make AI safe. Sam Altman says that OpenAI devotes significant resources to research in this area.

He nevertheless concludes by saying that companies should be allowed to develop their projects freely. “But the governance of the most powerful systems, as well as decisions regarding their deployment, must be subject to strong public oversight,” he asserts, before affirming that AI is a democratic question on which the people also have a say.

“Current systems will create tremendous value in the world and, while they do have risks, the level of those risks feels commensurate with other internet technologies and society’s likely approaches seem appropriate,” he concludes.

Source: OpenAI
