OpenAI presents Sora, its generative video AI. And the result is breathtaking!


Samir Rahmoune

February 15, 2024 at 10:04 p.m.


The start of an animation generated by Sora © Capture Clubic - OpenAI

OpenAI is giving us a taste of an exceptional novelty. After ChatGPT, its generative text AI, and DALL-E, its generative image AI, the Californian firm now offers Sora, a video tool!

Even though the progress of generative AI was very impressive last year, one hurdle in the field still seemed difficult to overcome: video. Current artificial intelligence did not seem capable of producing convincing results in this format, as the infamous video of Will Smith eating spaghetti showed. And yet, OpenAI surprises us once again by presenting Sora today!

60-second videos can be created

“The sky is the limit!” Rarely has this phrase applied as well as it does to OpenAI. The company led by Sam Altman has just presented a major new achievement with its AI Sora, capable of generating… videos.

A so-called text-to-video AI where, as with ChatGPT, you simply enter a description of what you want in order to get a result. And as you can see below, the result is more than convincing!

According to OpenAI, the tool remains limited to brief productions for the moment. “Sora can create videos up to 60 seconds long featuring highly detailed scenes, complex camera movements, and multiple characters with vibrant emotions,” the company states in its announcement post on X.

OpenAI in the middle of intensive testing of Sora

But even as it is, the performance is of a very high level. Even with this time limit, it already seems possible to create trailer-style videos with Sora, which OpenAI hastened to do, with a result you can see below.

Obviously, the realism of these productions raises even more acutely the question we have been asking ourselves for a year, since the emergence of generative AI: doesn’t Sora pose a risk? In the wrong hands, it could become very easy for anyone to produce fake videos passing themselves off as real, with the potentially devastating consequences we can imagine, particularly at the political level.

For this reason, OpenAI is continuing the tool’s testing phase. “We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who are adversarially testing the model,” the company says. Could a public release follow at the end of the process?

Source: OpenAI on X
