Sixteen artificial intelligence companies make new security commitments


The logo of the ChatGPT application developed by OpenAI, November 23, 2023 in Frankfurt (AFP/Kirill KUDRYAVTSEV)

Sixteen of the world’s leading artificial intelligence (AI) companies, whose representatives met in Seoul on Tuesday, have made new commitments to ensure the safe development of the technology, the British government announced.

“These commitments ensure that the world’s leading AI companies will be transparent and accountable for their plans to develop safe AI,” British Prime Minister Rishi Sunak said in a statement released by the British Department for Science, Innovation and Technology.

The agreement, signed notably by OpenAI (ChatGPT), Google DeepMind and Anthropic, builds on the consensus reached during the first global “summit” on AI security, held last year at Bletchley Park in the United Kingdom.

This second “summit” in Seoul is jointly organized by the South Korean and British governments.

AI companies that have not yet made public how they assess the security of the technologies they develop have committed to doing so.

– “Intolerable” risks –

This includes determining what risks are “deemed intolerable” and what companies will do to ensure that these thresholds are not crossed, the press release explains.

In the most extreme circumstances, companies also commit “not to develop or deploy a model or system” if mitigation measures do not keep risks below set thresholds.

These thresholds will be defined before the next “summit” on AI, in 2025 in France.

The companies accepting these security rules also include the American technology giants Microsoft, Amazon, IBM and Meta, France’s Mistral AI and China’s Zhipu.ai.

The runaway success of ChatGPT shortly after its 2022 release sparked a rush in the generative AI field, with tech companies around the world investing billions of dollars into developing their own models.

Generative AI models can produce text, photos, audio, and even videos from simple prompts. Their supporters present them as a breakthrough that will improve the lives of citizens and businesses around the world.

But human rights defenders and governments also fear their misuse in a wide range of situations, including to manipulate voters through fake news or “deepfake” photos and videos of political leaders.

Many are demanding that international standards be established to govern the development and use of AI.

In addition to security, the Seoul “summit” will examine how governments can help drive innovation (including AI research in universities) and how the technology could help solve problems such as climate change and poverty.

The two-day Seoul meeting is being held partially virtually, with some sessions held behind closed doors while others are open to the public in the South Korean capital.

© 2024 AFP
