Are you wary of deepfakes? Here’s how to spot them


Cybercriminals are not short on resources. It was recently discovered that they were using deepfake technology to impersonate candidates during online job interviews.

“Deepfakes,” AI-generated synthetic recreations of a real person’s audio, image, and video, have posed a potential identity theft threat for several years.

To combat these increasingly polished scams, security researchers have found a simple way to tell whether you’re dealing with a deepfake: just ask the person on the call to turn their face to the side.

The weak side of deepfakes

Why the side? Simple: deepfake AI models are very good at recreating a person’s face from the front, but profile views are still difficult for them.

Metaphysic.ai has shown how poorly live deepfake videos handle 90° profile views, making side-profile verification a simple and effective authentication step for companies conducting job interviews online by videoconference.

Job interview intruders

Last June, the FBI warned that more and more deepfakes were being used during job interviews conducted by videoconference, a recruitment format that became widespread during the Covid-19 pandemic.

The targeted interviews were specifically those for IT and technology positions, roles that would give impostors access to company databases, customers’ private data, and even proprietary information.

At the time, the FBI encouraged videoconference participants to look for inconsistencies between the video and the audio, such as coughing, sneezing, or other noises that don’t match the image, to determine whether they were dealing with an impostor. But side-profile verification may be a faster and easier check to run before starting any video meeting.

Lack of profile data

Writing for Metaphysic.ai, Martin Anderson detailed the company’s experiments: most deepfakes fail glaringly when the synthetic head reaches 90° and reveals elements of the actual profile of the person behind it. Profile recreation usually fails for lack of good-quality profile training data, forcing the deepfake model to invent much of what it cannot reproduce.

Part of the problem is that deepfake software relies on detecting landmarks on a person’s face in order to recreate it. Seen from the side, a face offers the algorithms only half as many detectable landmarks as a frontal view.
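To make that landmark gap concrete, here is a minimal sketch assuming dlib is installed and its standard 68-point predictor file has been downloaded separately; the image filenames are placeholders. It simply shows that a detector trained on frontal faces often fails to align any landmarks at all once the head turns to a true profile.

```python
# Minimal sketch: probing a 2D facial-landmark detector with frontal vs.
# profile images. Assumes dlib is installed and the standard 68-point
# predictor file (shape_predictor_68_face_landmarks.dat) is present.
import dlib

detector = dlib.get_frontal_face_detector()  # HOG detector, trained on frontal faces
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_report(image_path: str) -> None:
    image = dlib.load_rgb_image(image_path)
    faces = detector(image, 1)  # upsample once to help detection
    if not faces:
        # Typical outcome for a true 90-degree profile: the frontal
        # detector finds no face, so no landmarks can be aligned at all.
        print(f"{image_path}: no face detected")
        return
    shape = predictor(image, faces[0])
    print(f"{image_path}: {shape.num_parts} landmarks fitted")

landmark_report("frontal.jpg")   # usually: 68 landmarks fitted
landmark_report("profile.jpg")   # often: no face detected
```

Profile-aware detectors do exist, but the frontal bias of everyday 2D alignment tooling is exactly the weakness described here.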

Martin Anderson notes that the main weaknesses in recreating a face in profile are the limitations of 2D facial-alignment algorithms and the scarcity of profile data for almost anyone who isn’t a Hollywood star.

The hunt for deepfakes

Arguing in favor of using the side profile to authenticate videoconference participants, Martin Anderson points out that the shortage of training data for ordinary people is likely to persist. Profile photos are rarely requested because they are unflattering, and photographers have little incentive to take them since a profile conveys little emotional information about a face.

“This dearth of available data makes it difficult to obtain a sufficiently diverse and extensive range of profile images of non-famous people to train a deepfake model to replicate a profile convincingly,” says Martin Anderson.

“This weakness in deepfakes offers a potential way to uncover ‘fake’ callers in video meetings, a risk the FBI recently classified as emerging: if you suspect a caller may be a ‘deepfake clone,’ you can ask them to turn sideways for more than a second or two and see whether their appearance still convinces you.”

9 out of 10 biometric systems vulnerable

Sensity, which works on identity verification and deepfake detection, reported in May that 90% of widely adopted biometric verification systems used in financial services for KYC (Know Your Customer) compliance were severely vulnerable to deepfake “face swap” attacks.

The company adds that commonly used liveness tests, in which a person looks into a connected device’s camera, moves their head from side to side, and smiles, are also easy for deepfakes to fool. Sensity’s deepfakes did move their heads from side to side, though the footage shows they stopped rotating before reaching 90°.

Giorgio Patrini, CEO and Chief Scientist of Sensity, confirmed to Metaphysic.ai that the tests did not include 90° profile views.

How to spot impostors

Deepfakes, then, still struggle to reproduce a person’s profile. If you have any doubt, or need to verify the identity of the person you’re speaking with, simply ask them to turn their head 90° for a few seconds.
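For a sense of how such a check could be automated, here is a rough sketch using MediaPipe Face Mesh to approximate head yaw from a webcam feed; the landmark indices (nose tip = 1, cheeks = 234/454), the yaw heuristic, and the threshold are illustrative assumptions, not a validated detection method.

```python
# Rough sketch: estimating head yaw during a video call with MediaPipe
# Face Mesh, so a moderator can check that a caller really completes a
# near-90-degree turn. The heuristic below is illustrative, not validated.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

def estimate_yaw(frame) -> float | None:
    """Crude yaw proxy comparing nose-to-cheek horizontal distances.

    Returns roughly 0.0 for a frontal face, approaching +/-1.0 as the
    head turns toward a full profile. None if no face is tracked."""
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    nose, left, right = lm[1], lm[234], lm[454]
    d_left = abs(nose.x - left.x)
    d_right = abs(right.x - nose.x)
    return (d_right - d_left) / (d_right + d_left + 1e-6)

cap = cv2.VideoCapture(0)  # local webcam stands in for the call's video feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    yaw = estimate_yaw(frame)
    if yaw is not None and abs(yaw) > 0.8:
        print("Near-profile view reached; inspect the silhouette closely.")
    cv2.imshow("verification feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

In practice the judgment is still visual: at the moment the yaw peaks, a genuine face shows a clean silhouette, while a deepfake typically smears or distorts.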

This is not the only technique, though. You can also ask the participant to wave a hand in front of their face: the movement disrupts the model and exposes the latency and quality problems of the overlay on the deepfake face.

Source: ZDNet.com