OpenAI tries to reassure the general public about the risks of AI with new safety procedures


Samir Rahmoune

December 20, 2023 at 8:02 p.m.


A smartphone displaying ChatGPT in front of the OpenAI logo © T. Schneider / Shutterstock.com

OpenAI has implemented several new measures intended to reduce the risks created by the development of powerful artificial intelligence.

The year 2023 began with ChatGPT, which amazed the world with its exceptional capabilities. It ends with growing concerns about the technology, especially since the sector is led by OpenAI, a start-up shaken by the on-again, off-again departure of its CEO Sam Altman, all against a backdrop of fear over the risks posed by AI. The company has since sought to reassure the public with new measures.

Management may be challenged by the Board of Directors

During the Sam Altman psychodrama, the public learned that the company had already reached a level of technological development that gave it a potentially extremely dangerous AI. This was apparently one of the reasons that pushed chief scientist Ilya Sutskever to lead his revolt.

OpenAI has since learned its lesson and wants to avoid this kind of problem in the future. This is surely why the Californian firm issued new guidelines earlier this week. Under these, the Board of Directors will be able to block the deployment of an AI it considers risky, even if management maintains that it is safe. A way to add a new safeguard.

OpenAI © Vitor Miranda / Shutterstock

OpenAI will continuously assess potential risks

But the new measures do not stop at stronger management oversight. A team led by Aleksander Madry will be tasked with continually assessing the risks posed by the language models the company develops, across four major categories, including cybersecurity and chemical, biological and nuclear threats.

The team will be on alert for risks considered potentially "catastrophic," a label applied to "any risk that could result in hundreds of billions of dollars in economic damage or lead to serious harm or death for many people."

This same team will also submit a monthly report detailing the results of its research. Will this be enough to reassure those in power, more and more of whom want to regulate the technology?

Source: Bloomberg


