Deepfakes can’t reproduce people in profile


Deepfakes are getting better and better, that's for sure. But there is still one thing that artificial intelligence cannot convincingly reproduce: people in profile.

There have been impressively realistic Tom Cruise deepfakes, and even a Thierry Ardisson show that used the technology to bring Dalida back to the screen. Deepfakes, synthetic videos in which AI-generated imagery impersonates real people, are now common. But while the technology can be put to positive uses, it is unfortunately often exploited for nefarious purposes.

We saw one such example in March 2022, when a deepfake video impersonating Ukrainian President Volodymyr Zelensky, calling on soldiers to lay down their arms, circulated on the internet. More recently, the FBI warned that cybercriminals were using deepfakes during remote job interviews.

Some deepfakes are crude, while others are remarkably convincing. But according to researchers at Metaphysic, the company behind the fake Tom Cruise videos, artificial intelligence has a weakness: depicting people in profile.

A deepfake of Sylvester Stallone, very realistic from the front, becomes absurd in profile. // Source: Metaphysic

Artificial intelligence does not have enough references

Metaphysic, in a publication of August 8, 2022 on its site, explains that artificial intelligence is still largely incapable of depicting people fully in profile, that is, with the face turned at 90°.

"There is an interesting vulnerability in deepfake videos that has, until now, been generally overlooked," write the Metaphysic researchers. The reason, according to the authors, is simple: until recently, video-conference calls using deepfakes were not a cause for concern.

How can this defect be explained? There are two main reasons, according to Metaphysic. First, the artificial intelligences used to analyze and map faces rely on dozens of landmark points, which they can locate reliably on faces seen from the front. However, "most recognition algorithms find only 50 or 60% of these points on faces in profile," the study notes. The more a face is turned sideways, the harder it becomes for the AI to analyze.
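One intuition behind that 50-60% figure is self-occlusion: when the head turns, the landmarks on the far side of the face are simply hidden from the camera. The toy model below (entirely illustrative, not Metaphysic's method or any real recognition algorithm) places landmark points on the front of a unit sphere standing in for a head, rotates them, and counts how many still face the camera at various yaw angles:

```python
import math

# Toy model: facial landmarks approximated as points on the front half of a
# unit sphere (the "head"). A landmark counts as detectable only while its
# surface normal faces the camera (positive z after rotation) -- a crude
# stand-in for self-occlusion as the head turns sideways.

def landmarks():
    """A deterministic 5x5 grid of points on the front of a unit sphere."""
    pts = []
    for x in (-0.6, -0.3, 0.0, 0.3, 0.6):
        for y in (-0.6, -0.3, 0.0, 0.3, 0.6):
            z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
            pts.append((x, y, z))
    return pts

def visible_fraction(yaw_deg):
    """Fraction of landmarks still facing the camera after a yaw rotation."""
    theta = math.radians(yaw_deg)
    pts = landmarks()
    visible = 0
    for x, y, z in pts:
        # Rotate about the vertical (y) axis, keep only the depth coordinate.
        z_rot = -x * math.sin(theta) + z * math.cos(theta)
        if z_rot > 1e-9:
            visible += 1
    return visible / len(pts)

for yaw in (0, 45, 90):
    print(f"yaw {yaw:>2} deg: {visible_fraction(yaw):.0%} of landmarks visible")
```

Even in this crude sketch, all landmarks are visible head-on, nearly all remain at a three-quarter angle, and well under half survive at a full 90° profile, consistent with the drop-off the researchers describe.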

Multi-view Hourglass Model: artificial intelligence cannot identify the landmark points of a face seen in profile. // Source: Metaphysic

The other reason is simpler: "Nobody outside of medical personnel, special effects experts, and law enforcement wants profile pictures." Full-face and three-quarter photos are the ones most commonly found in databases, because "profile photos are the least flattering, and the least expressive."

There are therefore fewer profile photos for the AIs to train on, which compounds the algorithms' basic limitations.

If one day you are in a remote job interview and suspect the person you are talking to might be using a deepfake, you know what to do: ask them to turn 90°.


