OpenAI assembles a team of experts to fight against the “catastrophic” risks of AI


Image: Yuichiro Chino/Getty Images.

Artificial intelligence continues to reshape the way we interact with technology, and there is no denying it will have an enormous impact on our future. It is equally clear that AI poses some serious risks if left unchecked.

This is where a new team of experts brought together by OpenAI comes in.

Monitor AI work

Designed to help combat what OpenAI calls “catastrophic” risks, the company’s new team of experts – named Preparedness – plans to evaluate current and future AI models against several risk factors. In particular, it will track individualized persuasion (content tailored to what the recipient wants to hear), cybersecurity, autonomous replication and adaptation (an AI that modifies itself), and chemical, biological, radiological and nuclear (CBRN) threats.

You may be thinking that a nuclear war started by an AI is a bit far-fetched. But remember that earlier this year, a group of top AI researchers, engineers, and CEOs, including Demis Hassabis, CEO of Google DeepMind, issued a dire warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

How could AI cause nuclear war?

Today, computers are ubiquitous and help determine when, where and how military strikes take place. AI, if it is not already involved, will be in the future. But this technology, in addition to being prone to hallucinations, lacks the judgment a human brings to these kinds of decisions. In short, an AI could conclude that it is time for a nuclear strike when a human would judge otherwise.

“Cutting-edge AI models that exceed the capabilities that currently exist in the most advanced models will have the potential to benefit all of humanity. But they will also pose greater and greater risks,” warns OpenAI.

The role of Preparedness

To help control artificial intelligence, the OpenAI team will focus on three main questions:

  • If misused, how dangerous are the cutting-edge AI systems we have today, and those that will be created in the future?
  • If cutting-edge AI models were hijacked, what exactly could a malicious actor do with them?
  • How can we build a framework to monitor, assess, predict and protect against the dangerous capabilities of cutting-edge AI systems?

This team is led by Aleksander Madry, director of the MIT Center for Deployable Machine Learning and co-director of the MIT AI Policy Forum.

To expand its research, OpenAI has also launched the “AI Preparedness Challenge,” aimed at preventing misuse that could lead to catastrophe. The company is offering up to $25,000 in API credits to up to 10 of the top submissions that describe plausible, but potentially catastrophic, misuses of OpenAI’s technology.

Source: ZDNet.com


