Should the transparency and explainability of algorithms be reinforced? – Forbes France

Having become essential in an era of digitalization of our society, AI arouses as much hope and admiration as it does concern. The reason: the performance of algorithms is prioritized at the expense of their transparency and explainability. This situation breeds wariness, even outright distrust, among consumers and citizens, and at the same time creates great confusion about what is actually at stake when we talk about trusted AI.
By
Pascal MONTAGNON, Director of the Digital, Data Science and Artificial Intelligence Research Chair – OMNES EDUCATION, and Eric Brown, Associate Professor – INSEEC Bachelor

With the widespread use of automated decision-making systems, whether based on logic models (expert systems, etc.) or on statistics (machine learning, deep learning, etc.), it is important to be able to explain, interpret and confirm the results produced by algorithms, especially as they are inexorably taking hold in people's daily lives: banking transactions, detection of tax and social-security fraud, new recruitment methods, tracking of browsing on digital devices, facial recognition, tighter monitoring of employee productivity, the granting of bank loans, chatbots, or the selection of students in higher education (the Parcoursup algorithm highlighted the lack of transparency of the algorithms used by universities to admit students), etc.

Until recently, algorithms and the decisions they produced, based on mathematical formulas and equations, were strongly influenced by their developers, their culture, their way of seeing things. Those rules now stem instead from "hidden" properties of the data from which the algorithms learn on their own, depending on the volume and quality of the data processed. Such algorithms are reputedly difficult to understand, both for the expert developer and for the lay user, because their optimization procedures are precisely at odds with the methods of human reasoning and cannot easily be reproduced identically.

Even if these algorithms always remain, at their origin, more or less under the developer's influence, the resulting decision may largely escape them, leaving open all the problems caused by the complexity, the opacity and sometimes the sheer incomprehensibility of the solutions proposed by the algorithm.

From a mathematical point of view, the search for a good machine learning model comes down to minimizing a cost function or maximizing a likelihood function. The performance of the model is therefore measured almost exclusively on its results against suitably chosen metrics. This trend has led to the creation of ever more sophisticated and complex algorithms, to the detriment of their explainability. It is also true that what makes machine learning algorithms difficult to understand is precisely what makes them excellent predictors.
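To make the idea concrete, here is a minimal sketch (ours, not the authors') of what "minimizing a cost function" means in practice: a linear model fitted on synthetic data by gradient descent on the mean squared error. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # synthetic explanatory variables
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)                                # model parameters to learn
lr = 0.1                                       # learning rate
for _ in range(500):
    residuals = X @ w - y
    cost = np.mean(residuals ** 2)             # the cost function being minimized
    grad = 2 * X.T @ residuals / len(y)        # its gradient with respect to w
    w -= lr * grad

print("learned weights:", np.round(w, 3))      # close to true_w once the cost is near its minimum
```

Model quality here is judged only by how low the cost gets, which is exactly the performance-first logic described above.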

To understand this fully, it seems important to us at this stage to present below the conceptual diagram for building a Machine Learning (ML) application, and in particular the selection of the type of data, its processing, and the choice and combination of the so-called explanatory variables, which are the source of ontological choices that can give rise to manipulation.

The diagram dissects the various stages in the design of a machine learning application. As in any system involving AI, data is the basic ingredient that must serve the desired model from its creation to its final use and, eventually, its explainability. One point of vigilance should be highlighted here: ontological constraints on format and processing, themselves generated by design choices, can influence the behavior and therefore the explainability of the application.
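As an illustration of the kind of design choices the diagram describes, the sketch below shows a hypothetical pipeline in which the selected explanatory variables, their formats and their encodings are fixed up front; the column names and preprocessing steps are our assumptions, not the article's.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# A toy dataset: which variables are kept, and what the target is, are design choices.
df = pd.DataFrame({
    "income":   [30_000, 52_000, 41_000, 75_000],
    "region":   ["north", "south", "south", "north"],
    "approved": [0, 1, 0, 1],                        # target variable
})

# Ontological choices on format and processing: scaling for numbers,
# one-hot encoding for categories. These choices shape what the model can learn.
preprocess = ColumnTransformer([
    ("numeric", StandardScaler(), ["income"]),
    ("categorical", OneHotEncoder(), ["region"]),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("classifier", LogisticRegression()),
])
model.fit(df[["income", "region"]], df["approved"])
```

Every step encoded here later constrains what can be explained about the model's behavior.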

Expert systems, decision trees, random forests and classifiers, and more generally all Machine Learning models based on symbols and operations, can be interpreted without great difficulty. All of them can be expressed analytically, as successions of variables with different weights, or as sets of logical rules.
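A decision tree is the simplest example of this: assuming a standard toolkit such as scikit-learn (our choice, not the article's), the fitted model can be printed directly as the logical rules it applies, the dataset being used here only for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The fitted model expressed as a succession of logical rules on the input variables.
print(export_text(tree, feature_names=load_iris().feature_names))
```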

This is why the notions of transparency, interpretability and explainability raise as many questions as concerns.

First let’s talk about transparency.

The transparency of algorithms arises in various contexts and receives various explicit definitions that deserve an effort of clarification.

The first difficulty encountered is the polysemy [1] of the word "transparency", reinforced by the development of AI in recent years, a subject that has become very sensitive. The transparency of an algorithm can refer to two types of properties [2]: extrinsic ones, such as loyalty and fairness, or intrinsic ones, such as interpretability and explainability.

Depending on the type of property, the notion of transparency can vary. But one basic principle is essential: an algorithm is, and must fundamentally be designed as, a machine. Let us not ascribe to it a capacity for racist, discriminatory or even sexist reactions. However, it should not be ignored that the use of algorithms can of course produce effects of this kind when its programming is executed, generally beyond the effects intended by its designer.

It should also be noted that part of the existing literature on the subject confuses the terms "algorithm" and "program", which are often misused. There is an intuitive distinction between the two: an algorithm is a mathematical object describing a procedure, while a program is a technical object that implements an algorithm in a given programming language. Although it is not always easy to separate the two given the diversity of programming languages, one cannot deny the heuristic value that this distinction brings to machine learning and to AI in general.

Whatever the programming language used and the associated algorithm, the operating principle remains based on the exploitation of data. The input data are known, but their exploitation throughout the machine learning process gradually leads the developer to feel dispossessed of control over it, as far as a complete grasp of the output data is concerned. The automated decision resulting from these algorithms has thus become opaque: this is what is referred to as a "black box". The phenomenon is accentuated by a corpus of data that keeps growing and reinforces the use of algorithmic solutions, since the quantity of data to be integrated has become so voluminous that a human brain can hardly take it all into account. It then becomes essential to understand the criteria behind the decision proposals produced by AI algorithms. The need for trust and transparency becomes pressing and requires a real reorientation of the issues surrounding Machine Learning, in particular the need to explain them [3].

Interpretability or explainability?

The need for transparency and trust in Machine Learning algorithms, such as neural networks or reinforcement learning mechanisms, has thus brought to light two concepts that should not be confused and that need to be clarified: interpretability and explainability.

Interpretability consists in understanding the reasoning of the Machine Learning algorithm and the internal representation of the data it uses, in a format understandable by an individual. The result obviously depends on the quantity, the quality and the precise knowledge of the data used, as well as on the model retained [4]. In other words, interpretability answers the question of "how" an algorithm makes a decision (which data are taken into account, which calculation methods, and so on).
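A hedged sketch of that "how": with a transparent model such as a regularized logistic regression, the internal representation can be read off directly as the weight attached to each input variable. The dataset and model below are our illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Each coefficient says how strongly (and in which direction) a variable
# pushes the prediction: the model's internal representation is directly readable.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, weight in top:
    print(f"{name}: {weight:+.2f}")
```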

Explainability, for its part, consists in providing information in a language accessible to any user, whoever they are and whatever their level of knowledge or expertise on the subject in question [Gilpin et al., 2018]. The vocabulary used is adapted, depending on the situation, to the audience receiving the information. In other words, explainability answers the questions "What are we talking about?", "What is it used for?", "Why and for what purpose do we do it?".
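By contrast, an explanation targets the recipient rather than the model's internals. The hypothetical helper below (invented names, thresholds and wording, not taken from the article) simply restates a decision in everyday language for a non-expert.

```python
def explain_loan_decision(income: float, debt_ratio: float, approved: bool) -> str:
    """Restate a binary credit decision in plain language for the applicant."""
    reason = ("your income covers the repayments comfortably"
              if debt_ratio < 0.35
              else "your existing debts weigh too heavily against your income")
    outcome = "approved" if approved else "declined"
    return (f"Your application was {outcome} because {reason} "
            f"(declared income: {income:,.0f} €, debt ratio: {debt_ratio:.0%}).")

print(explain_loan_decision(income=42_000, debt_ratio=0.42, approved=False))
```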

Can we, or should we, explain or interpret everything? Is this even desirable?

The answer does not come naturally. Of course, it is necessary to demystify these "black boxes" and to increase transparency in order to justify the results obtained by Machine Learning. But should we stop at the interpretability and explainability of algorithms? Obviously not. We now speak of trusted AI, which goes far beyond these two concepts.

While the development of AI has until now relied on software bricks installed on centralized servers (the cloud), the next steps, already visible to all, consist in embedding AI directly, as close as possible to where it is used: in connected objects, in systems, and in industrial installations.

To do this, embedded AI must meet many criteria, including performance, energy frugality and trust. Performance and frugality matter all the more when AI is embedded in the operation of often critical systems, as this makes it possible to reduce considerably the communication and energy requirements involved, while imposing strong constraints on the performance of the electronic components. This locally embedded trusted AI (as close as possible to the user) must in particular guarantee safety (absence of failures presenting risks for individuals, property or economic activity), security (cybersecurity, data protection), and the robustness and reliability of the decision mechanisms of AI applications.

In concrete terms, trust applies to all the building blocks of AI systems in order, ultimately, to justify the trust that users place in them.

To conclude, as you will have understood, artificial intelligence algorithms have different levels of opacity. Depending on the type of algorithm, some are easily explained while others remain stubbornly opaque. Depending on the quality and quantity of the data used, the interpretability that leads to explainability will be easy or extremely complex to achieve, despite the training of the algorithm.

So, should the transparency and explainability of algorithms be strengthened? The transparency desired for an algorithm does not guarantee that its explainability can systematically be achieved.

However, we believe it has become necessary to explain all the foundations on which an algorithm was built, so as to understand optimally how it works, both in terms of relevance (but who would doubt that?) and in terms of ethics, by avoiding all forms of discrimination. This approach, known in English as fairness, is still in its infancy.
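As a first-steps illustration of such a fairness check (our own sketch, on synthetic data), one can compare the rate of positive decisions across groups defined by a sensitive attribute; a large gap flags a possible discriminatory effect of the kind the authors argue must be avoided.

```python
import numpy as np

group = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])   # sensitive attribute
decision = np.array([1, 0, 0, 0, 1, 1, 1, 0])                # model's binary decisions

# Demographic parity: compare positive-decision rates between the two groups.
rate_a = decision[group == "A"].mean()
rate_b = decision[group == "B"].mean()
print(f"positive rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```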

Finally, let’s not forget a basic principle: in AI as in statistics, there will always be a generalization bias, the effects of which should be minimized as much as possible.
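Keeping that bias in view usually means comparing performance on the data the model has seen with an estimate of its out-of-sample performance; the sketch below, on an illustrative dataset and model of our choosing, makes the gap visible.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

train_acc = clf.score(X, y)                          # optimistic, measured on seen data
cv_acc = cross_val_score(clf, X, y, cv=5).mean()     # closer to out-of-sample behavior
print(f"train accuracy: {train_acc:.3f}, cross-validated: {cv_acc:.3f}")
```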


[1] Polysemy: the property of a word that has several meanings.

[2] Mael Pégny and Issam Ibnouhsein, "What transparency for machine learning algorithms?", 2018, hal-01877760.

[3] Kate Crawford and Trevor Paglen, "Excavating AI: The Politics of Images in Machine Learning Training Sets", 2019.

[4] Gilpin et al., "Explaining Explanations: An Overview of Interpretability of Machine Learning", 2018, Cornell University.

Article written by:

Pascal MONTAGNON – Director of the Digital, Data Science and Artificial Intelligence Research Chair – OMNES EDUCATION

Eric Brown – Associate Professor – INSEEC Bachelor

<<< Also read: Artificial intelligence: Should we be wary of algorithms? >>>
