What would happen if a super-intelligent AI became malicious? OpenAI doesn’t want to know


What would happen if a super-intelligent AI, smarter than the smartest humans, ever turned malevolent? A new OpenAI team wants to make sure we never find out.

In an announcement this week, OpenAI said it was building a team to “steer and control AI systems much smarter than us.”

OpenAI goes on to claim that superintelligence, which could emerge within the next decade, would be the most impactful technology ever created and could help solve many of the world’s most important problems. But it also issues a rather stern warning: “It could also be very dangerous and lead to the disempowerment of humanity, or even to its extinction.”

Humans can control artificial intelligence because they are smarter

As it stands, the company says humans can control AI because they’re smarter. But what will happen when AI overtakes humans?

This is where the new team comes in, co-led by Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike. The team will consist of OpenAI’s top researchers and engineers and will be allocated 20% of the compute the company has secured to date.

The end goal is to create an AI system that pursues its assigned objective without deviating from established parameters. The team plans to get there in three steps:

  • Understand how AI can evaluate other AIs without human intervention.
  • Use AI to search for problem areas and exploits.
  • Deliberately train part of a model the wrong way, to see whether the misalignment is detected.
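The third step can be pictured with a toy experiment. The sketch below is purely illustrative and is not OpenAI’s actual method: one tiny threshold classifier is deliberately trained on flipped labels (the “wrong way”), and a naive detector flags it by measuring how often it disagrees with a cleanly trained model. All names and thresholds here are hypothetical.

```python
# Toy illustration of "train a model the wrong way, then try to detect it".
# Not OpenAI's method; just a minimal sketch with a threshold classifier.
import random

random.seed(0)

def make_data(n):
    # Synthetic task: inputs above 0.5 are labeled 1.
    return [(x, int(x > 0.5)) for x in (random.random() for _ in range(n))]

def train(data, sabotage=False):
    # If sabotage=True, every label is flipped during training,
    # i.e. the model is deliberately trained the wrong way.
    labeled = [(x, (1 - y) if sabotage else y) for x, y in data]
    best = None
    # Fit the best threshold classifier, allowing an inverted orientation.
    for invert in (False, True):
        for t in [i / 20 for i in range(21)]:
            pred = lambda x, t=t, inv=invert: int((x >= t) != inv)
            acc = sum(pred(x) == y for x, y in labeled) / len(labeled)
            if best is None or acc > best[0]:
                best = (acc, pred)
    return best[1]

def disagreement(m1, m2, xs):
    # Naive detector signal: how often do two models disagree?
    return sum(m1(x) != m2(x) for x in xs) / len(xs)

clean_a = train(make_data(400))
clean_b = train(make_data(400))
bad = train(make_data(400), sabotage=True)

probe = [random.random() for _ in range(1000)]
print("clean vs clean:", disagreement(clean_a, clean_b, probe))  # low
print("clean vs sabotaged:", disagreement(clean_a, bad, probe))  # high -> flagged
```

Two cleanly trained models mostly agree, while the sabotaged one disagrees almost everywhere, so a simple disagreement threshold is enough to flag it in this toy setting; detecting subtle misalignment in large models is, of course, far harder.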

Humans use AI to train AI to keep super-intelligent AI under control

In short, a team of humans at OpenAI is using AI to help train AI to keep super-intelligent AI under control.

And they think they can achieve it within four years.

They admit it’s a lofty goal and success isn’t guaranteed, but they’re confident.


Source: ZDNet.com
