Yann LeCun (Meta): “Artificial intelligence will not be able to function without emotions”

Yann LeCun, 62, has been director of artificial intelligence research at Meta (Facebook, Instagram, WhatsApp, Oculus, etc.) since 2013. The Frenchman was awarded the Turing Award in 2019, often described as the Nobel Prize of computing, with his colleagues Yoshua Bengio and Geoffrey Hinton, for their work on deep learning.

What vision of artificial intelligence do you have for Meta in the next ten years?

You may have seen Her, Spike Jonze’s film released in 2013. It tells the story of the hero’s relationship with Samantha, his virtual assistant, with whom he falls in love. That is where we are heading. At the moment, we don’t have the technology to build the Samantha of the film. We don’t even have the science to make such an intelligent machine. But eventually, we want to introduce virtual agents that live in our augmented reality glasses. In everyday life, they will be able to supplement what we perceive. They could remind you to look left and right before crossing a street if you forget to, and warn you to step back onto the sidewalk if a car is coming. The virtual agent will tell you where you left your keys, and when you travel to a foreign country, a translation of your conversations will be displayed on your glasses in real time.

Is this really a society we want?

Yes, because we are overwhelmed by a monstrous amount of information that is growing exponentially, and we can no longer keep up. For example, I can’t read all my emails. We will need digital assistants that sort and select what is relevant, important, fun, educational, and so on. This will be a great source of progress in the way we interact with the digital world, but also with each other. But we are not there yet. It will take years.

A Google researcher claims to have detected consciousness in an artificial intelligence. Is that possible?

We are very far from an artificial intelligence that reaches this level. We are missing a critical piece needed to replicate the intelligence seen in animals and humans. Today, an alley cat has far more common sense than the most powerful intelligent system. Certainly, these computer systems are impressive, especially those that communicate by text or dialogue. They have the appearance of intelligence, but it is superficial: their reasoning abilities are very constrained. The limit of these systems is that they have no experience of the real, physical world.

Could artificial intelligence feel emotions?

Today no, tomorrow yes. In the first months of life, you learned to predict the physical consequences of your actions. That ability to predict is what allows you to plan. The essence of intelligence is the ability to predict. So we are going to build machines that will eventually be able to plan actions. For that, these machines will need goals and objectives: to go somewhere, for example. For us, it’s simple: opening a door, walking, taking the metro, we don’t need to think about it. But right now, machines can’t break a task down into sub-tasks. I myself have worked on an architecture that would allow this. Then, to pursue a goal, a machine will have to be able to anticipate whether an outcome will be good or bad, and that involves emotions. This is why artificial intelligence will not be able to function without emotions. If one day we have autonomous intelligent systems, they will have emotions. It’s somewhat controversial; not everyone agrees with me. But in my opinion, it is inevitable.

What are your latest advances in the field of artificial intelligence? What impact do they have on how Meta works today?

There have been giant leaps in language understanding, and translation is improving fast. This is due to a combination of techniques. In this area, research is very open and new ideas spread very quickly. There has been a real revolution in the way we teach language to machines. We take a large text, from 500 to 1,000 words, and replace 10% to 15% of the words with blank markers. Essentially, it’s a fill-in-the-blank text, like the exercises given to children. This is how the system trains itself to understand the nature of text. For example, “The cat is chasing the… in the kitchen”: the system must find the word “mouse”.
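The fill-in-the-blank training described above is known as masked language modeling. The corruption step can be sketched in a few lines of Python. This is an illustrative sketch only, not Meta’s actual pipeline; the function name, the `[MASK]` token, and the fixed seed are assumptions chosen for the example.

```python
import random

def mask_tokens(text, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """Hide roughly `mask_rate` of the words in `text` behind a mask token.

    Returns the corrupted text plus the held-out words, which a model
    would then be trained to recover from the surrounding context.
    """
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    words = text.split()
    masked, answers = [], {}
    for i, word in enumerate(words):
        if rng.random() < mask_rate:
            answers[i] = word          # ground truth the model must predict
            masked.append(mask_token)
        else:
            masked.append(word)
    return " ".join(masked), answers

corrupted, answers = mask_tokens("The cat is chasing the mouse in the kitchen")
print(corrupted)
```

In a real system the masking is applied to subword tokens rather than whole words, and the model is scored only on how well it predicts the hidden positions.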


Has this had an impact on the moderation of content on Meta platforms?

For the past two years, these tools have been used by Meta for content translation and moderation. Five years ago, the proportion of hate speech automatically detected by artificial intelligence systems was around 30%; the rest was flagged by users and then reviewed by human moderators, who decided whether or not to keep it. Today, just over 96% of hate speech is removed automatically by artificial intelligence. This is called “preemptive deletion”. The remainder, less than 4%, is content reported by users. Of course, there are still too many hateful posts, but their proportion has dropped drastically.

Can this system be applied to the 7,000 languages and dialects spoken around the world?

It works better in some languages than in others. But these new methods have enabled another revolution: detecting hate speech in several hundred languages with the same network. There is no need to program the machine for a specific language.

In Burma, Facebook has been used in particular to convey hatred towards the Rohingyas. Why did Meta let it happen?

In Burma, moderation has long been complicated because it required hiring people who speak Burmese, and since the Burmese government doesn’t like Meta very much, that was a real problem. We then had translation software that first translated from Burmese into English; it was up to an English-speaking moderator to spot the hateful remarks. Now all of this is done without going through English.

Has the machines’ progress made it possible to solve the problems raised by the whistleblower Frances Haugen? This former employee rightly criticized Facebook for a drastic lack of moderators speaking the language of each country…

In 2014-2015, there was relatively little content moderation. Meta has made tremendous progress in the past two years in detecting keywords, especially when it comes to pedophilia. It’s far from perfect, but there’s a real willingness from the company to tackle the problem.

Today around the world, 40,000 people work on safety and security issues for Meta’s platforms. Does this mean that one day we will no longer need a human moderator?

We will always need human moderators to deal with the subtleties. Some violent speech will slip through the AI’s net. Conversely, quite reasonable remarks will be deleted because of a warlike metaphor, because the software will not have been able to grasp that it was not meant literally.

As he plans to buy Twitter, Elon Musk has denounced the number of bots hiding behind that network’s users. Are fake accounts a problem at Meta?

In the first half of 2022, Meta deleted 1.6 billion fake accounts. In general, we have fewer bots than Twitter because Meta is primarily a network of friends whose motivation is to share, hence the interest in using one’s real identity, or at least being recognizable. People are also much more moderate in what they say there.
