AI giants face the challenge of managing a fast-evolving technology


Ministerial participants at the second global “summit” on artificial intelligence (AI) in Seoul, May 22, 2024 (AFP/ANTHONY WALLACE)

The second global "summit" on artificial intelligence (AI) concluded on Wednesday in Seoul with a collective commitment to managing the dangers of the technology, but the breakneck pace at which it is developing promises regulators many difficulties.

During the event, organized jointly by South Korea and the United Kingdom, sector leaders – from South Korea's Samsung Electronics to America's Google and OpenAI, the creator of ChatGPT – codified their commitments in a document titled "Seoul AI Business Pledge".

At the same time, more than a dozen countries, including the United States and France, agreed to work together against the threats posed by advanced AI, including "serious risks", according to a joint press release from these countries.

These risks could include an AI system helping “non-state actors advance the development, production, acquisition or use of chemical or biological weapons”, or being able to “evade human oversight, including through circumvention of protective measures, manipulation and deception, or autonomous replication and adaptation,” according to this press release.

The day before, sixteen of the biggest players in the sector had already signed an agreement to ensure the safety of AI, building on the consensus reached at the first global "summit" on the subject, held in 2023 at Bletchley Park (United Kingdom).

In particular, they promised to define which risks are "deemed intolerable" and what companies will do to prevent them. The signatories also committed to "not developing or deploying a model or system" whose risks would prove too difficult to control.

– Get in tune –

But experts say it is difficult for regulators to understand and manage AI, given the lightning speed with which it is developing.

“I think it’s a very, very big problem,” warns Markus Anderljung of the Center for AI Governance, a research organization based in Oxford, UK.

Markus Anderljung, from the Center for AI Governance, a British research organization, in Seoul on May 22, 2024 (AFP/ANTHONY WALLACE)

“AI will be one of the biggest challenges that governments around the world will face over the next two decades,” predicts this expert. “The world will need to develop some kind of common understanding of the risks associated with the most advanced general models.”

For Michelle Donelan, the British Secretary of State for Science, Innovation and Technology, "as the pace of development of AI accelerates, we must get in tune (…) if we want to control the risks."

At the next AI "summit", to be held February 10-11, 2025 in France, there will be more opportunities to "push the boundaries" of testing and evaluating new technologies, predicts Ms. Donelan.

“At the same time, we must focus our attention on mitigating risks outside of these models, ensuring that society as a whole becomes resilient to the dangers posed by AI,” adds the Secretary of State.

The runaway success of ChatGPT shortly after its 2022 release sparked a rush in the generative AI field, with tech companies around the world investing billions of dollars into developing their own models.

Generative AI models can produce text, photos, audio, and even videos from simple command prompts. Their supporters present them as a breakthrough that will improve the lives of citizens and businesses around the world.

But human rights defenders and governments also fear their misuse in a wide range of situations, including to manipulate voters through fake news or “deepfake” photos and videos of political leaders.

Many are demanding that international standards be established to govern the development and use of AI.

Rumman Chowdhury, an AI ethics expert who heads Humane Intelligence, an independent organization, in Seoul on May 22, 2024 (AFP/ANTHONY WALLACE)

"More and more, we realize that global cooperation is necessary to really think about the problems and harmful effects of artificial intelligence. AI knows no borders," says Rumman Chowdhury, an AI ethics expert who runs Humane Intelligence, an independent organization that evaluates AI models.

According to her, the danger comes not only from the "runaway AI" of science-fiction nightmares, but also from the inequality surrounding the technology, which is developed by "a very, very small number of people and organizations" who reap the benefits.

As for people in developing countries, such as India, "they're often the ones doing the cleaning. They're the data annotators, the content moderators. They clean the floor so everyone else can walk on virgin ground," laments Ms. Chowdhury.

© 2024 AFP
