AI leaders warn about the dangers of AI


In March, an open letter from tech industry experts called for a pause in the development of advanced AI models, warning that the technology posed “profound risks to society and humanity”. Now a new statement, co-signed by OpenAI CEO Sam Altman, AI “godfather” Geoffrey Hinton, and others, aims to reduce the risk of AI destroying humanity. The statement’s preface encourages industry leaders to discuss AI’s most serious threats openly.

According to the statement, mitigating the risk of extinction from AI should be a global priority on a par with other societal-scale risks such as pandemics and nuclear war. Other co-signers include Google DeepMind researchers, Microsoft CTO Kevin Scott, and internet security pioneer Bruce Schneier.

Today’s large language models (LLMs), popularized by ChatGPT, have not yet achieved artificial general intelligence (AGI), the term for an AI system capable of matching or surpassing human intelligence. Industry leaders are concerned, however, that LLMs will progress to that point.

The development of AGI could have serious consequences

AGI is a milestone that OpenAI, Google DeepMind, and Anthropic all hope to reach one day. But each company recognizes that its development could have significant consequences.

In testimony before the US Congress earlier this month, Altman said his greatest fear was that AI would “cause significant harm to the world”, and that such harm could take many forms.

A few weeks earlier, Hinton had abruptly resigned from his job at Google, where he worked on neural networks, telling CNN that he was “just a scientist who suddenly realized that these things were getting smarter than us”.

Source: “ZDNet.com”