An old mathematical paradox calls artificial intelligence into question!

What is Artificial Intelligence?

Artificial intelligence is a fairly recent field that refers to the use of machines or automated systems that attempt to imitate human intelligence in order to perform various tasks. We speak of artificial intelligence because these systems can improve as new information is collected; it is therefore a form of automated learning.

>> Read also: This is how artificial intelligence helps mathematicians

The chatbot is a well-known example of artificial intelligence: a small program that converses with a person for a few minutes, for example to provide help on a website. Another type of AI frequently encountered in everyday life is the recommendation engine, which suggests a particular television program based on a viewer's habits.

Artificial intelligence is above all linked to the capacity for reasoning and data analysis, and it aims to extend human capabilities quite substantially, which makes it very effective in many areas. In the vast world of artificial intelligence, there are many terms that are sometimes misinterpreted. The best known of these are Machine Learning and Deep Learning, two terms that are often confused and misused. Machine Learning refers to systems that learn and improve their performance from the data they process. Deep Learning is used to process unstructured data such as sound, images or text.
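To make the distinction concrete, here is a minimal sketch assuming the scikit-learn library (which the article does not mention): a classic machine-learning model learns from small structured data, while a small neural network, standing in very loosely for deep learning, handles unstructured data in the form of images.

```python
# A minimal sketch using scikit-learn (a library assumed for this illustration,
# not mentioned in the article) to contrast the two terms.
from sklearn.datasets import load_digits, load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "Machine Learning": a classic model that improves from structured, tabular data.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classic = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("tabular data accuracy:", classic.score(X_te, y_te))

# "Deep Learning" (here a very small stand-in): a neural network handling
# unstructured data, in this case 8x8 images of handwritten digits.
Xd, yd = load_digits(return_X_y=True)
Xd_tr, Xd_te, yd_tr, yd_te = train_test_split(Xd, yd, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(Xd_tr, yd_tr)
print("image data accuracy:", net.score(Xd_te, yd_te))
```

Real deep-learning systems are of course far larger than this small network; the point is only that the two terms refer to related but distinct practices.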

But while everything may seem perfect, researchers have discovered that AI suffers from certain limits linked to a very old mathematical paradox!

Mathematics: a paradox that weakens the operating principle of artificial intelligence

© William Barton, Shutterstock

The famous Enigma machine, whose code Alan Turing helped break. Turing also demonstrated that not everything in mathematics can be proved!

Nowadays, artificial intelligence mainly takes the form of the machine learning mentioned above. Machine Learning is a technique that allows artificial neural networks, modeled on the functioning of the human brain, to learn in a fully automated way. To do so, they must be fed large amounts of data that serve as a basis for learning, from which they deduce results.
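As an illustration of this principle, here is a toy sketch built for this article (not the researchers' setup): a tiny neural network, written with nothing but NumPy, learns the XOR function from four examples by adjusting its weights automatically.

```python
# A toy example built for this article: a tiny neural network learns the XOR
# function from four examples, using only NumPy and gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Learning data: the four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units with randomly initialised weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error with respect to the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: this is the fully automated "learning" step.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] after training
```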

These learning algorithms are used in many fields, including voice recognition, image recognition or even the making of various diagnoses.

Yet researchers have noticed a lack of reliability in some of these learning algorithms. They have identified a paradox that undermines the very operating principle of artificial intelligence.

This limit stems from a mathematical paradox that dates back almost a hundred years and was demonstrated by the British mathematician and cryptologist Alan Turing (1912-1954) and by Kurt Gödel (1906-1978), an Austrian-born logician and mathematician who became a naturalized American. They showed that not everything in mathematics can be proved.

According to these two mathematicians, there are indeed certain mathematical statements that can be neither proved nor refuted. Furthermore, algorithms cannot solve all computational problems. Finally, and even more strikingly, a consistent theory can in no way prove its own consistency, as long as it is sufficiently “rich”.
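The second point, that no algorithm can solve every computational problem, is the one that bears most directly on AI. The sketch below gives the flavour of Turing's diagonal argument; the names are purely illustrative, and `halts` is precisely the routine that cannot exist.

```python
# A sketch of Turing's diagonal argument. The names are purely illustrative:
# `halts` is exactly the general-purpose routine that cannot exist.
def halts(program, data):
    """Hypothetical oracle: returns True if program(data) eventually stops."""
    raise NotImplementedError("Turing showed no algorithm can do this in general.")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about `program` run on itself.
    if halts(program, program):
        while True:        # loop forever if the oracle says "it halts"
            pass
    return "halted"        # halt immediately if the oracle says "it loops"

# If `halts` existed, contrary(contrary) would halt exactly when the oracle says
# it does not -- a contradiction, so no such general algorithm can exist.
```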

>> Read also: Big bang: artificial intelligence helps to understand the state of matter

Artificial intelligence algorithms may not exist for some problems

This paradox concerning the “unprovability” of certain mathematical statements has now been carried over into the world of artificial intelligence. The researchers explain that there are inescapable limits inherent in mathematics, and that these have consequences in the field of artificial intelligence.

Because of this paradox, it is quite possible to create an artificial neural network, but its reliability cannot be guaranteed with certainty. For many applications, this is not really a problem.
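To see what an unreliable model can look like, here is a deliberately simple toy, a hand-built linear classifier rather than one of the networks studied in the paper, in which an input lying close to the decision boundary has its answer flipped by a tiny perturbation.

```python
# A deliberately simple toy (a hand-built linear classifier, not a network from
# the cited paper): an accurate-looking model flips its answer under a tiny change.
import numpy as np

w, b = np.array([1.0, -1.0]), 0.0           # "trained" weights of the toy classifier

def predict(x):
    return int(x @ w + b > 0)

x = np.array([0.51, 0.50])                  # an input very close to the decision boundary
x_perturbed = x + np.array([-0.02, 0.02])   # a perturbation invisible at a glance

print(predict(x), predict(x_perturbed))     # prints "1 0": the tiny change flips the answer
```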

On the other hand, there is a whole series of areas in which the slightest error, no matter how small, can become a real problem. Such errors could have serious consequences in high-risk fields such as the diagnosis of certain diseases, where artificial intelligence is increasingly used to assist doctors.

Autonomous vehicles are another high-risk sector, where the consequences could also be very serious. Unfortunately, there is currently no way to know how much confidence an artificial intelligence has in a given decision in certain areas, even though confidence should be the absolute priority.

This is no reason for researchers to abandon their work on machine learning and artificial intelligence. They must instead explore new approaches in order to develop systems capable of solving problems transparently and reliably and, in addition, able to recognize their own limits.

>> Read also: An artificial intelligence solves the Schrödinger equation

Source: Matthew J. Colbrook, Vegard Antun and Anders C. Hansen, “The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale’s 18th problem”, March 16, 2022.
