“The only danger of AI is its energy consumption”

Interview conducted in 2021.

Should we be afraid of AI?

LJ: Absolutely not! It is only a tool that helps us accomplish certain tasks. It's like the hammer, which was meant to drive nails until one day someone used it to hit the neighbor over the head. There is no need to be afraid of the hammer: it just requires regulation to specify its uses. Moreover, the term "artificial intelligence" is misleading: it was coined in 1956 to sum up the ambitions of the researchers of the time, but it in no way reflects reality.

Artificial intelligence is nonetheless a very powerful tool, one that raises questions, starting with respect for privacy… What is your opinion?

LJ: It is essential that we, as citizens, be educated and informed about what AI does, knowing that we can be monitored through facial recognition, as in China… It is then up to us to decide, knowingly, what private information we choose to share or not, and with whom. I, for example, accept that my fingerprints be recorded to access my gym, because it makes my life easier. Scientists, citizens and governments need to come together to define rules that keep things from going too far, exactly as was done for the control of nuclear weapons. At the risk, of course, of over-regulating and slowing down innovation. You have to find the right balance.

AI is also a formidable weapon of disinformation…

LJ: You mean deepfakes on social media? Fake videos are fun, but that is really low-level AI. You have to learn to doubt what you see, to develop your critical thinking. Besides, it is nothing new: video fakery has existed since the invention of cinema. It is worth remembering that it is not the algorithm that makes social networks, but the people who feed them.

Should we be afraid of killer robots?

LJ: I could indeed build a robot that shoots people. And it is not even very complicated. But there again, it is up to us, at the level of society, to decide whether we accept it or not. Society must regulate the machines.

>> Read also: Robot soldiers, the temptation of a license to kill

Beyond the rules, these machines are not infallible: autonomous cars have accidents, for example…

LJ: The truly autonomous car will probably never exist, because it is impossible to adapt to every situation. When we humans have an accident, our reaction is simple: we try to save ourselves. And very often we don't have time to react, so chance decides. Unthinkable for the car! In the event of an accident, we will demand explanations. Yet AI is not transparent, and its decisions are difficult to understand. But it is not impossible, contrary to what we sometimes hear: we know very well how these systems work and which variables influence their decisions. It is just that there are so many interactions that it requires considerable effort. The search for solutions is in progress…

>> Read also: Autonomous car: a new simulation tool to differentiate driver behavior

Should we also fear the impact of AI on employment?

LJ: It is obvious that some jobs will disappear… and that new ones will appear. However, those that disappear will most often be arduous or tedious jobs. Most of the time, it will be better for us: AI will help us, relieve us, and leave us more time to devote to others, to personal activities, to hobbies…

Could artificial intelligence one day surpass us and escape our control?

LJ: You have to understand that, contrary to what some say (usually people who do not know the subject well), AI does nothing on its own: its actions can never overtake us. Let's be realistic: what we know how to do today is completely stupid. Certainly, the machine can beat us in areas for which it has been trained: chess, the game of go… It can even find winning moves that had never been considered. But that is not intelligence: it invented nothing; it can simply calculate thousands of moves per second. In the end, it does little better than the Pascaline, the calculating machine that Pascal built in the 17th century. To create an intelligence comparable to ours, it would be necessary to develop an infinity of artificial intelligences. And even assuming we knew how to do that, the main problem would not be that it takes control over us.

What would be the main danger with AI?

LJ: The real danger is the energy consumption of this technology. We are engaged in a vast trend toward dematerialization, accelerated by artificial intelligence. But we do not realize that, at the current rate, we are heading straight into a wall. To play go, DeepMind's system, Google's AI, draws nearly 440,000 watts, the equivalent of a small data center. The human brain draws only 20 watts… and it does many other things besides playing go! The digital economy, with the internet, networks, data storage and blockchain technologies, already accounts for nearly 20% of global electricity consumption. And that is even though only 50% of humanity today has access to the internet.
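To put those two orders of magnitude side by side, here is a quick back-of-the-envelope calculation using the figures quoted in the interview (the 440,000 W figure is the interviewee's rough estimate for the go-playing system, not an official measurement; ~20 W for the human brain is a commonly cited approximation):

```python
# Rough power comparison based on the figures quoted above.
ai_power_w = 440_000   # watts drawn by the go-playing AI (interviewee's estimate)
brain_power_w = 20     # watts drawn by the human brain (common approximation)

ratio = ai_power_w / brain_power_w
print(f"The AI draws about {ratio:,.0f}x the power of a human brain")
# → The AI draws about 22,000x the power of a human brain

# Energy used over a hypothetical one-hour game (power x time):
game_hours = 1
ai_energy_kwh = ai_power_w * game_hours / 1000      # 440 kWh
brain_energy_kwh = brain_power_w * game_hours / 1000  # 0.02 kWh
```

On these numbers, a single hour of machine play costs as much energy as a human brain uses in roughly two and a half years.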

For future development to be sustainable, we will have to move quickly from big data to small data. Rather than centralizing everything in huge data centers that consume half of their energy on cooling, we will have to try to decentralize, to consume less energy, and to produce it locally. And we must change the way we design algorithms, so that they always manage to do more while consuming less. In the meantime, choices may have to be made. Running AIs to detect cancers and save lives, yes. But such a burst of energy to play go or StarCraft, no. AI must serve people and relieve them in their daily lives, not lock them into a virtual universe or mortgage their future.

>> Read also: Streaming, bitcoin, AI… energetic delirium!
