Intel’s Deepfake Detector is 96% Accurate…For Now


With the rapid rise of artificial intelligence, deepfakes (AI-based digital forgeries that replace a person's voice or face in a video or audio clip) have become a legitimate cause for concern: they are now so convincing that the average viewer often cannot tell the difference (see our article).


This situation has pushed several big names in artificial intelligence to develop specialized deepfake-detection programs. Today it is Intel's turn to present its progress in the field: the American firm has announced that its program, called FakeCatcher, can identify the vast majority of video deepfakes, and do so in real time, a considerable step forward compared with other current systems.


FakeCatcher owes this efficiency to a method quite different from traditional detection algorithms. Those generally dissect the raw data that makes up a video without paying much attention to the actual succession of images. FakeCatcher, on the other hand, relies on an approach based on photoplethysmography.


A system based on blood circulation


This mouthful of a term refers to a non-invasive technique for examining blood vessels. It involves illuminating living tissue and measuring how much light is reflected or absorbed by the blood vessels and their contents; from that, information about blood circulation can be derived.


It is this technique that allows certain devices, such as smartwatches, to measure your heart rate. Intel's researchers, for their part, had the idea of putting the same concept to work in the hunt for deepfakes.
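
To see how a heart rate falls out of such a signal, here is a minimal sketch, not Intel's code, using only NumPy: the pulse shows up as the dominant frequency of the light signal, which a Fourier transform makes easy to read off. The signal in the example is synthetic, purely for illustration.

```python
import numpy as np

def estimate_heart_rate(ppg_signal, sample_rate_hz):
    """Estimate a heart rate (beats per minute) from a 1-D PPG signal.

    Illustrative only: real devices also filter out noise and motion artifacts.
    """
    # Remove the constant (DC) component so only the pulsatile part remains.
    signal = ppg_signal - np.mean(ppg_signal)

    # The pulse appears as the dominant frequency of the signal.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)

    # Only consider plausible human heart rates (roughly 40 to 180 bpm).
    mask = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    dominant_hz = freqs[mask][np.argmax(spectrum[mask])]
    return dominant_hz * 60  # cycles per second -> beats per minute

# Synthetic 30-second signal sampled at 30 Hz with a 72 bpm pulse, for demonstration.
t = np.arange(0, 30, 1 / 30)
synthetic_ppg = 1.0 + 0.05 * np.sin(2 * np.pi * (72 / 60) * t)
print(round(estimate_heart_rate(synthetic_ppg, 30)))  # prints 72
```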


Using sophisticated algorithmic tools, it is possible to spot tiny variations in hue, invisible to the human eye, that are directly tied to the way blood circulates through the face. Taken together, these variations form a sort of unique signature, which can be synthesized as a spatio-temporal map describing how the hue changes over time.
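
The sketch below illustrates the general idea of turning a face video into such a spatio-temporal map: average the green channel (the one most sensitive to changes in blood volume) over a grid of facial sub-regions, frame by frame. It is only a rough illustration, assuming OpenCV is available and using a hypothetical file name; FakeCatcher's actual pipeline is far more elaborate.

```python
import cv2
import numpy as np

def hue_map_from_video(path, grid=8, max_frames=300):
    """Build a crude spatio-temporal map: one row per face sub-region,
    one column per frame, each cell holding the region's mean green value.

    Illustrative sketch only; 'path' is a placeholder for any face video.
    """
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(path)
    columns = []

    while len(columns) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face_green = frame[y:y + h, x:x + w, 1]  # green channel of the face crop

        # Average the green channel over a grid x grid set of sub-regions.
        cell_h, cell_w = h // grid, w // grid
        column = [face_green[r * cell_h:(r + 1) * cell_h,
                             c * cell_w:(c + 1) * cell_w].mean()
                  for r in range(grid) for c in range(grid)]
        columns.append(column)

    capture.release()
    # Shape: (grid * grid regions, number of frames kept)
    return np.array(columns).T

# Example usage (hypothetical file name):
# ppg_map = hue_map_from_video("clip.mp4")
# print(ppg_map.shape)
```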


Current deepfake algorithms, however, are unable to reproduce these extremely subtle changes. When this approach is applied to a faked video, the resulting map contains inconsistencies that, while inconspicuous, are easy for a specialized AI-based program to pick out. In practice, the map therefore serves as a marker of authenticity that makes it possible to identify deepfakes with excellent precision.
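
Once such maps exist, spotting the inconsistencies becomes a classification problem. The following stand-in, which is in no way Intel's actual (unpublished) model, trains a generic scikit-learn classifier on maps labelled as real or fake; the random arrays are placeholders for maps produced as sketched above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: in practice each map would come from hue_map_from_video()
# applied to known-real and known-fake clips. Shapes: (n_clips, regions, frames).
rng = np.random.default_rng(0)
real_maps = rng.normal(size=(100, 64, 300))
fake_maps = rng.normal(size=(100, 64, 300))

X = np.concatenate([real_maps, fake_maps]).reshape(200, -1)  # flatten each map
y = np.array([0] * 100 + [1] * 100)  # 0 = authentic, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)

# With real labelled clips, this held-out score is the kind of figure a claim
# like "96% accurate" refers to; on random placeholder data it stays near chance.
print("held-out accuracy:", classifier.score(X_test, y_test))
```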


A successful program… for now


Intel's team claims the system gets it right 96% of the time. What makes this figure even more impressive is that, contrary to what intuition might suggest, it is extremely fast: FakeCatcher needs only a few milliseconds to flag a deepfake. That qualifies as real-time detection, a first in this field according to the firm. This speed also allowed Intel to host the program on a standard web-accessible server, an important consideration when it comes to using a system of this kind in real-world conditions.
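
Intel has not published its server code, but exposing a detector of this kind over the web can be as simple as the hypothetical Flask sketch below, in which detect_deepfake() is only a placeholder for the real model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_deepfake(video_bytes):
    """Placeholder for the real model: it would build the PPG map and classify it."""
    return {"is_deepfake": False, "confidence": 0.0}

@app.route("/analyze", methods=["POST"])
def analyze():
    # Expect a video file uploaded under the form field "video".
    uploaded = request.files.get("video")
    if uploaded is None:
        return jsonify({"error": "no video provided"}), 400
    verdict = detect_deepfake(uploaded.read())
    return jsonify(verdict)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```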


Its main limitation is that it exploits the relative imprecision of current deepfakes. As the algorithms that create them improve, subtle cues such as those famous hue variations tied to blood circulation will eventually be built into fake videos. This approach will therefore not work indefinitely; researchers will have to keep finding new ways to tell the genuine from the fake, at least until deepfakes catch up again. And so on.


The dynamic resembles the one between hackers and cybersecurity researchers: an endless game of cat and mouse between a technology that advances at high speed and the people trying to prevent its abuses. It will be worth keeping an eye on this balance of power, because how dangerous deepfakes become depends directly on how well these detection systems perform.

