OpenAI faces a dilemma: should it launch Sora, its video-generating AI, before or after the American elections?


The American company OpenAI is weighing when to release Sora. There are technical considerations, but also political ones: a US presidential election is underway, with the vote set for November 2024.

Should OpenAI refrain from launching Sora before the US elections, which will be held on November 5, 2024? That, in short, is the question facing the American company specializing in generative artificial intelligence, raised during an interview its chief technology officer, Mira Murati, gave to the Wall Street Journal on March 13.

Barring any upheaval, the vote will be a rematch between Donald Trump and Joe Biden. Given the personality of the Republican frontrunner, the way he is campaigning, his insistence that he lost re-election to a vast conspiracy against him, and a right-wing electorate at fever pitch, unrest is to be feared.

In this explosive cocktail, generative AI could be one more ingredient, liable to play a harmful role in this democratic moment by misleading voters. It is a problem OpenAI is weighing, according to Mira Murati, and it could well influence Sora's release date.

Sora, the AI that transforms text into video. // Source: OpenAI

Sora is a generative AI prototype specialized in video. It was presented in February, and many demonstration videos were posted online. The results are stunning, but they raise the same questions as other generative AI systems regarding copyright and the future of certain creative professions.

Currently, the tool can create clips up to a minute long in high image quality (1080p). Some renderings approach photorealism, and scenes can be highly varied, including complex camera movements and staging. But of course, there are also weak or faulty renderings.

Sora is set to make its public debut in 2024

To the Wall Street Journal, Mira Murati confirmed her company's plans to open Sora to the public later this year. The tool, unveiled last February, would then become freely accessible, like OpenAI's two other main products already available: ChatGPT (text generation) and Dall-E (image generation).

But, she warned, "it could take a few months." First, there is a purely technical consideration. OpenAI has no intention of releasing an unoptimized tool: Sora must have the lightest possible electrical and computing footprint on the company's infrastructure, able to absorb massive use without straining OpenAI.

Optimization work is underway, according to Mira Murati. Above all, there is the timing of the election. With a campaign that will only gain momentum over the coming months, and given all the abuses likely to occur, disinformation among them, Sora's release could slip.

"We will not publish anything whose impact we are not sure of"

Mira Murati

"You know, it's definitely a consideration when it comes to issues of misinformation and harmful bias," Mira Murati acknowledged. As a result, she continued, "we will not release anything we are not confident about regarding its impact on global elections or other issues."

Tests underway to limit damage

In fact, it is impossible to try Sora today. Only a very small handful of hand-picked people have access to it, including members of the "red team," tasked with pushing the video-generating AI to its limits in order to uncover operating problems. The goal is to fix them before the public launch.

This red team is used to finding vulnerabilities and helps put safeguards in place so that Sora does not generate just anything. Clearly, Sora must be safe for work (SFW), that is to say, it must maintain an attitude suitable for the whole family. Biases, weaknesses, and bad behavior must therefore be tracked down.

Overall, Sora should inherit most of the rules already in place for ChatGPT and Dall-E, to ensure homogeneity and consistency across all the tools designed by OpenAI. For example, there is no question of allowing artificial videos involving public figures (such as Trump or Biden).

Images created with AI featuring Donald Trump. // Source: Numerama with Midjourney

Concerning nudity, the obvious danger is that of pornographic deepfakes. The crudest content could be banned, but Mira Murati notes that the company is also considering whether to retain the ability to create nudity for artistic purposes. OpenAI works with artists, and this point is part of their discussions.

OpenAI's hesitation is reminiscent of Midjourney's. At the start of the year, its CEO explained that he was considering banning the creation of political images by AI, particularly during the American presidential campaign. Filters already exist that prevent the generation of imaginary visuals of Trump or Biden.

This reflection on the right launch window for Sora nonetheless has its limits.

First, it obscures the fact that politically motivated fake news already circulates on the internet. Second, the issue of video-generating AI is not limited to Sora: other tools, more or less controllable, exist and will continue to emerge. Above all, the challenge will not be resolved even by a post-November 5 release. What about future elections?
