The program “Hôtel du Temps” resurrects Dalida and foreshadows the deepfake revolution

Behind Ardisson’s feat stands the Mac Guff studio. Its co-founder explains how deepfakes are disrupting the work of image makers: “New professions are being invented. It’s a new El Dorado, a jungle.”

What if we brought the dead back to life using artificial intelligence… in order to interview them? That is the incredible bet Thierry Ardisson took on May 2, for an hour and a half, with his program “Hôtel du Temps”, by “reviving” a giant of French song, Dalida. To make it possible, the host enlisted French heavyweights: IRCAM for the voice, and the Mac Guff studio for the image. At its head is Rodolphe Chabrier, one of the veterans of digital imagery and VFX. With his studio he has worked alongside Gaspar Noé, Mathieu Kassovitz, Jan Kounen and Michel Ocelot, and its sister studio Illumination Mac Guff, a subsidiary of Universal, is behind the animated hit Despicable Me.

The technological masterstroke that is “Hôtel du Temps” is not the studio’s first exploit. Behind the rejuvenated faces of Mathieu Amalric and Aleksey Gorbunov in the series Le Bureau des Légendes there was no make-up, but a deep learning tool developed by Rodolphe Chabrier and his team, the “Face Engine”. It is a technology that reshuffles the deck for the whole discipline. “Tomorrow, we could make someone with two left feet dance like Michael Jackson,” laughs Rodolphe Chabrier. So is digital imaging dead, long live deep learning? The answer is not that simple. But it is indeed a revolution. Interview.

Your studio specializes in digital imaging. Since when have you been using artificial intelligence?

Rodolphe Chabrier: We’ve been interested in deep learning since 2018. I couldn’t tell you the exact moment we switched. But yes, artificial intelligence is now in the process of durably disrupting the profession, and above all it is a many-layered subject. We chose to approach it through faces, by developing an artificial intelligence model we called “Face Engine”. This work first became visible to the general public in the last season of Le Bureau des Légendes, for which we received a technical César, and then in the Arsène Lupin series for Netflix. (JJA and Karlov, the characters played by Mathieu Amalric and Aleksey Gorbunov, play their own roles, rejuvenated by 30 years thanks to the model developed by Mac Guff, editor’s note.) Their weathered faces are rejuvenated by the process.

It relies on a type of neural network called a GAN (generative adversarial network), which makes it possible to manipulate faces. We developed our own tools with our own funds and with support from the CNC. Thanks to the globalization of knowledge and to open source, “Face Engine” is an aggregate of many AI models made available by the AI research community, combined with the digital imaging know-how we have built up over more than 35 years. I have no doubt that others will eventually catch up with us, but for the moment we have a certain technological and organizational lead. The old world meant making films like The Irishman with hyper-sophisticated, heavy and extremely expensive 3D technologies. (In the film, Al Pacino and Robert De Niro are rejuvenated, editor’s note.) Our plan was to do the same thing, but with tools based on deep learning. We were lucky that shortly before the start of the pandemic, Thierry Ardisson came to see us with his idea for a show, which confirmed that our approach was the right one.
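For readers unfamiliar with the term, the following is a minimal sketch of the adversarial principle behind a GAN, written in PyTorch. It is an illustration of the general idea only, not Mac Guff’s “Face Engine”: a generator learns to produce face-like images while a discriminator learns to tell them apart from real ones. The network sizes, image dimensions and the random stand-in “dataset” are assumptions chosen purely to keep the example small and runnable.

```python
# Toy GAN sketch: generator vs. discriminator trained adversarially.
import torch
import torch.nn as nn

LATENT_DIM, IMG_PIXELS = 64, 64 * 64 * 3        # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),      # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                          # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_faces = torch.rand(16, IMG_PIXELS) * 2 - 1  # placeholder for a real face dataset

for step in range(100):                          # real training runs vastly more iterations
    # Discriminator: learn to separate real faces from generated ones.
    fake = generator(torch.randn(16, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_faces), torch.ones(16, 1)) + \
             bce(discriminator(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator.
    fake = generator(torch.randn(16, LATENT_DIM))
    g_loss = bce(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In production face-manipulation pipelines this basic adversarial game is wrapped in far heavier architectures and data preparation, but the generator-versus-discriminator loop is the core mechanism the interviewee is referring to.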

At first, your interest in AI didn’t win over many people?

Rodolphe Chabrier: It was seen as our pet project, an “Internet thing” that would never be usable for broadcast (broadcast: the delivery standard for material aired on TV or on Netflix, for example, editor’s note). Yet applications based on AI models were already appearing, even on smartphones. Of course, what is produced in those formats cannot be used in a cinema production, but it pointed to a trend. With my partner Martial Vallanchon, we threw ourselves into it headlong. When everything stopped during the first three months of COVID, it gave us the chance to keep moving forward without interruption.

I can’t go into too much detail about how it is made. It isn’t software where you press a button and off you go. That is the whole problem with artificial intelligence: these are black boxes. We don’t control everything, but we have processes that give us a minimum of levers and, above all, allow us to produce usable, coherent results for broadcast. On the other hand, it required us to invest heavily. The CPU (central processing unit) requirements are substantial, but the GPU requirements, in other words graphics processing, are colossal.

On Thierry Ardisson’s program “Hôtel du Temps”, you literally brought Dalida back to life. The image is staggeringly realistic. Is the deepfake applied to the face?

Rodolphe Chabrier: Yes, “Hôtel du Temps” rests on a deepfake base (an AI-based media synthesis technique that can be used to superimpose existing video or audio onto other video or audio, editor’s note). But it wasn’t that simple. AI models need large datasets. Achieving a very realistic rendering is straightforward when you have plenty of image sources, hundreds of hours of 4K footage. For this project, we are talking about sources that are 50 or 60 years old; we had to do a great deal of upstream processing so that the material we had was compatible with deep learning tools. And I’m not just talking about working on the pixels to make the images usable, but also about what makes a good dataset, what we actually need…
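As an illustration of that upstream work, here is a minimal sketch, using OpenCV’s stock face detector, of how one might turn old archive footage into normalized face crops that a deep learning tool can train on. The file name, sampling rate and 256x256 crop size are assumptions; a real pipeline would also involve restoration, alignment and careful curation of what goes into the dataset.

```python
# Extract frames from an archive clip, detect faces, crop and resize them
# into a training-ready folder of images.
import cv2
from pathlib import Path

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
out_dir = Path("dataset/faces")
out_dir.mkdir(parents=True, exist_ok=True)

video = cv2.VideoCapture("archive_1967.mp4")   # hypothetical archive clip
frame_idx = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_idx % 5 == 0:                     # keep one frame in five (assumed rate)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
            cv2.imwrite(str(out_dir / f"face_{saved:06d}.png"), crop)
            saved += 1
    frame_idx += 1
video.release()
```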

As for the question of whether this is purely face work: yes, for now “Face Engine” works on the face (using footage from a shoot with the host and an actress who plays Dalida and who learned the artist’s gestures, editor’s note). But we are already working to extend the process to the body (Body Engine) and even to the environment (Global Engine). Some video game sequences are already circulating that have been run through AI models fed with databases from the City of Paris, and the rendering is perfectly realistic. Tomorrow, we could make someone with two left feet dance as well as Michael Jackson. Or gather all the available data on a James Dean and have an actor who resembles him perform, put through the mill of artificial intelligence. That tomorrow is two to three years away. In the longer term, we could imagine having no actor at all, just a virtual character with the face and gait of James Dean. That will obviously require even more colossal resources…

In fact, what is interesting with artificial intelligence is that we no longer create the objects themselves. We create machines that create objects according to rules. These models should be understood in the sense of mathematical or climate models, if you will.

Kind of like an operating system?

Rodolphe Chabrier: Not quite. They are not objects in themselves, but models capable of understanding the world. “Face Engine” understands what a face is, for example. To design it, we feed it with data, but above all we have to train it. More than training it, we have to educate it. It is like raising children: poor parenting makes them rude or prejudiced. It’s the same thing. It is difficult to backtrack with an AI model; otherwise you have to start almost from scratch, because the number of iterations is so high. Millions of loops are run. It can take hours, days, even weeks to obtain a perceptible result. And you need a trained eye to tell whether the path taken is the right one, so you can correct course.
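A minimal sketch of that “educate, watch, correct” loop is shown below. The tiny model and random data are placeholders, and the checkpoint interval is an assumption rather than Mac Guff’s practice, but the overall pattern, very many iterations, regular checkpoints so a wrong turn does not mean starting completely over, and periodic outputs for a human eye to judge, is what the interviewee describes.

```python
# Long training loop with checkpoints and periodic samples for human review.
import torch
import torch.nn as nn
from pathlib import Path

Path("checkpoints").mkdir(exist_ok=True)

model = nn.Linear(128, 128)                       # stand-in for a real face model
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

CHECK_EVERY = 10_000                              # assumed review interval

for iteration in range(1, 50_001):                # real runs reach millions of loops
    batch = torch.randn(32, 128)                  # placeholder training batch
    loss = loss_fn(model(batch), batch)           # placeholder reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

    if iteration % CHECK_EVERY == 0:
        # Save a checkpoint so a bad turn does not force a restart from scratch...
        torch.save({"iter": iteration,
                    "model": model.state_dict(),
                    "optimizer": opt.state_dict()},
                   f"checkpoints/model_{iteration:07d}.pt")
        # ...and produce a sample output a human can inspect.
        with torch.no_grad():
            sample = model(torch.randn(1, 128))
        print(f"iter {iteration}: loss={loss.item():.4f}, sample mean={sample.mean():.3f}")
```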

But is it compatible with the very tight production times of a TV show or a movie?

Rodolphe Chabrier: On the contrary! It’s totally compatible. The proof: we produced more than an hour of visual effects for Thierry Ardisson’s show. What takes time is building the model. Once it is designed, it can churn out running time at colossal speed. In this business we usually talk in terms of “produced seconds”. A graphic artist working alone on an hour of visual effects would need a year. Once the model is in place, you can feed in ten minutes of footage and have a first result the next day. I’m exaggerating, of course. But it changes everything. For work on a face, it means we can start working even before the shoot, or before the shots are approved.
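To make the “produced seconds” comparison concrete, here is a back-of-the-envelope sketch using the figures quoted above, plus an assumed overnight turnaround for the trained model. The numbers are illustrative orders of magnitude, not measurements.

```python
# Rough throughput comparison in "produced seconds" per day.
SECONDS_OF_EFFECTS = 60 * 60                 # one hour of finished footage

artist_days = 365                            # roughly a year of work, per the interview
artist_rate = SECONDS_OF_EFFECTS / artist_days
print(f"Hand-crafted: ~{artist_rate:.1f} produced seconds per day")

model_rate = 10 * 60                         # assumed: 10 minutes rendered overnight
print(f"Trained model: ~{model_rate} produced seconds per day")
print(f"Speed-up: roughly x{model_rate / artist_rate:.0f}")
```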

Is this a new profession for you?

Rodolphe Chabrier: New professions are being invented. It’s a new El Dorado, a jungle. I feel as if I have been transported 35 years back, to when we were doing 3D on PCs. We will need data specialists who know how to retrieve, process and improve data; graphic artists; developers of these tools; but also AI compute operators who will, in effect, be educators of AI, able to observe and correct. As for me, I see myself a bit like a chef with a brigade of varied talents covering processing, compute management and data integration. I suggest we take this ingredient, put it in the oven, then in ice water or under a hair dryer, and we see whether it worked or not. There is something very organic about the way it works. Once the recipe is found, the pattern is found. The art lies in creating the AI model.


