Google, Amazon, Meta & co: Tech giants sign a sweeping pact to protect upcoming elections from disinformation


Mathilde Rochefort

February 19, 2024 at 2:24 p.m.


Tech giants are committed to doing everything they can to prevent the spread of misinformation. © Andy.LIU / Shutterstock

Meta, OpenAI, Google, Microsoft, IBM, Adobe, Snapchat, X.com, TikTok… Tech giants are pledging to stand together against disinformation. Their main targets: deepfake audio, video and images generated by artificial intelligence (AI).

Action is urgently needed. Large-scale elections are being held around the world this year, including in Europe, the United States, India and Indonesia, affecting more than 4 billion people in over 40 countries. At the same time, the number of deepfakes has increased by 900% in just one year, according to data from machine learning company Clarity. The rise of generative AI, popularized by ChatGPT, worries experts all the more.

Eight commitments

Faced with this unprecedented threat, the major technology companies have reached an agreement built around eight high-level commitments. These include assessing the risks of their models, researching, detecting and addressing the distribution of such content on their platforms, and being transparent with the public. The press release specifies that these commitments apply only “when relevant to the services provided by each company”.

The pact comes just as OpenAI has unveiled Sora, an AI model that creates realistic videos from text. “There are serious reasons to worry about how AI could be used to mislead voters in campaigns. It’s encouraging to see that some companies are coming to the table, but at the moment I don’t see enough details, and we will probably need legislation that sets clear standards,” comments Josh Becker, a Democratic state senator from California.

In January, a deepfake audio recording of Joe Biden urging Democratic voters in New Hampshire not to vote in the primary offered a concrete example of the threat posed by this technology. Deepfakes can be used not only to sway a vote, but also to dissuade voters from going to the polls at all.

Screenshot of a video generated by OpenAI’s Sora. © Capture Clubic – OpenAI

Many challenges to overcome

Several companies have already taken steps to prevent malicious use of their models. OpenAI and Meta embed a watermark in images generated by their AIs, DALL-E and Imagine. Midjourney, for its part, plans to block the creation of visuals featuring election candidates.

However, many challenges remain. Watermarks are easily bypassed, with a simple screenshot for example, and they are not yet deployed for AI-generated video and audio. For the moment, the companies involved have merely agreed on a set of technical standards and detection mechanisms… and it is far from certain that this will be enough to tackle the problem at its root.

Source: CNBC
