Preventing deepfakes must be a top priority for law enforcement, says Europol

The Europol Innovation Lab – an entity responsible for identifying, promoting and developing innovative solutions to support the operational work of Member States – has devoted a report to deepfakes, ultra-realistic video manipulations that are consequently difficult to detect. Thanks to machine learning techniques, anyone’s face and voice can be imitated, sometimes almost perfectly, from a limited sample.

Examples of known deepfakes are numerous. Two weeks after Russia invaded Ukraine, a deepfake of Volodymyr Zelensky, the Ukrainian president, was broadcast on the hacked Ukraine 24 news channel, as well as on several social networks. In it, he urged the population to lay down their arms. The perpetrators of this fake have not yet been identified (and may never be).

A technology widely adopted by criminals

For Europol, the spread of deepfakes must be taken very seriously by every link in the law enforcement chain, from the police to the courts. Indeed, “advances in artificial intelligence and the availability of large image and video databases mean that the volume and quality of ‘deepfake’ content is increasing“, writes the European agency in its report. More specifically, it is the development of generative adversarial networks (GANs) that has made these manipulations widely accessible. A GAN is a machine learning technique that stands out for its ability to “create”: two neural networks are trained against each other, one generating fake content and the other trying to distinguish it from real content, until the fakes become convincing.
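To make the idea concrete, here is a minimal, illustrative GAN sketch in Python with PyTorch. It is not taken from the Europol report; the layer sizes and the training data are hypothetical stand-ins. A generator learns to turn random noise into fake samples while a discriminator learns to tell real from fake, and each network improves by competing against the other.

```python
# Minimal GAN sketch (illustrative only; layer sizes and data are hypothetical).
import torch
import torch.nn as nn

# Generator: turns random noise into a fake sample (here a flat 784-dim vector).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.rand(64, 784)  # stand-in for a batch of real training images

for step in range(1000):
    # 1) Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(64, 100)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into predicting "real".
    noise = torch.randn(64, 100)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

With real image data and deeper convolutional networks in place of these toy layers, the same adversarial loop is what underpins modern deepfake generation.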

Europol notes that there are many use cases for deepfakes. Criminals can use them to manipulate information aimed at the general public but also at companies. “As the volume of deepfakes increases, trust in authorities and facts is being undermined“, notes the report. The authorities themselves also risk being fooled, leading to “poor decisions” in the context of interventions or investigations. The technology can also be used to falsify identity documents, disrupt financial markets, facilitate the online sexual exploitation of children, and perpetrate extortion and fraud.

“An absolute priority”

As a result, “preventing and detecting deepfakes must be the top priority for law enforcement“, warns Europol. Although accustomed to dealing with false evidence, the police do not necessarily have the technologies needed to detect such manipulations. “Law enforcement agencies will not only need to train their staff to detect deepfakes, but also invest in their technical capacities in order to effectively meet the challenges ahead while respecting fundamental rights“, concludes the report.

Tech companies and academic players are developing anti-deepfake tools. Microsoft and Peking University have presented two tools: FaceShifter, which can swap faces while preserving a set of characteristics (background, lighting, head pose, etc.), and Face X-Ray, which can precisely detect the part of an image that has been modified.
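Detection tools of this kind are typically built as classifiers trained to distinguish genuine images from manipulated ones. The sketch below is not Face X-Ray itself, only a generic, hypothetical “real vs. manipulated” classifier in PyTorch meant to illustrate the general approach; the architecture, input size and data are placeholders.

```python
# Generic "real vs. manipulated" image classifier sketch
# (not Face X-Ray itself; architecture and data are hypothetical).
import torch
import torch.nn as nn

# Small convolutional network: input is a 3x128x128 face crop,
# output is the probability that the image has been manipulated.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 1), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Stand-in batch: 8 face crops with labels (1 = manipulated, 0 = genuine).
images = torch.rand(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

# One training step: score the batch and update the detector's weights.
scores = detector(images)
loss = loss_fn(scores, labels)
loss.backward()
optimizer.step()
```

In practice, such detectors are trained on large datasets of genuine and manipulated faces; Face X-Ray goes further by localising the blending boundary left behind when a face is pasted into an image.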
