Responsible AI: Positive AI, a label to anticipate European requirements

Artificial intelligence raises not only ecological and robustness issues, but also ethical problems, subjects that the European and French institutions are examining closely. At the European level, it is the AI Act that should determine companies’ obligations in this area.

“The European Commission has given a clear framework for responsible AI,” said Laurent Acharian, marketing director at BCG GAMMA, during a press conference. “This framework should apply from 2025. On a digital timescale, 2025 is far, very far away: a lot can happen between now and then.”


Responsible AI: taming algorithms on a daily basis

In the meantime, while waiting for the authorities to implement this regulation, it is companies themselves that must find quick answers by following the recommendations already available, especially since these organizations face both internal and external pressure to master sensitive uses of AI.

In any case, this is the position of the members of the Positive AI initiative, founded at the start of 2022 by BCG GAMMA, the AI subsidiary of Boston Consulting Group, together with Orange France, L’Oréal and Malakoff Humanis.

These organizations have made a simple observation: the operationalization of responsible AI is not an easy task.

“At Orange, we worked on a recommendation algorithm for sports TV channel packages in order to target our customers as well as possible,” says Gaëlle Le Vu, Communications Director at Orange France. “The first version of this algorithm placed such a strong weight on gender that, if we had let the system do its thing, I think no more customers would have signed up for these sports packages,” she illustrates.
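Neither Orange nor Positive AI has published the technical details of that correction, but the kind of check involved can be sketched in a few lines. The example below is purely illustrative: it uses synthetic data, a plain logistic regression and a demographic parity gap as the bias signal, none of which is drawn from Orange’s actual system.

```python
# Illustrative sketch only (not Orange's system): one simple way to detect that a
# recommendation model leans too heavily on a sensitive attribute, by comparing
# recommendation rates across gender groups (demographic parity gap).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                # 0 / 1, hypothetical sensitive attribute
watch_hours = rng.gamma(2.0, 2.0, n)          # proxy for interest in sports content
# Historical subscriptions are correlated with gender in this synthetic data set,
# mimicking the kind of bias that real training data can carry.
subscribed = (0.3 * watch_hours + 1.5 * gender + rng.normal(0, 1, n)) > 2.0

X = np.column_stack([watch_hours, gender])
model = LogisticRegression().fit(X, subscribed)

recommend = model.predict(X)                  # who the model would target
rates = [recommend[gender == g].mean() for g in (0, 1)]
print(f"recommendation rate, group 0: {rates[0]:.2%}")
print(f"recommendation rate, group 1: {rates[1]:.2%}")
print(f"demographic parity gap: {abs(rates[0] - rates[1]):.2%}")
# A large gap is the kind of signal that would trigger a review (reweighting the
# data, dropping the attribute, or adding fairness constraints) before deployment.
```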

At Malakoff Humanis, the problem is all the more complex because the insurer uses its algorithms to handle sensitive subjects. For four years, the insurer has deployed an algorithm that allows its teams and doctors “to identify so-called suspicious or abusive work stoppages,” explains David Giblas, deputy director in charge of, among other things, “data and digital” at Malakoff Humanis.

Even though the algorithm was designed in collaboration with doctors and the final decision remains human, the insurer found that there were still too many false positives: some medical checks were simply unnecessary. “There were age, location and gender biases. We tried to understand why, because the training data is real data,” says David Giblas. That data itself contained biases. “It’s natural, it’s human,” he says.

Malakoff Humanis has therefore set up quarterly reviews of its training data, its algorithms and their results. And when a user decides not to follow the AI’s advice, that case is also studied.
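Malakoff Humanis has not detailed the tooling behind these reviews. As a purely hypothetical illustration of what such a per-group check might look like, the sketch below compares false positive rates (cases flagged for a medical check that turned out to be legitimate) across an invented age grouping; the thresholds and group labels are assumptions, not the insurer’s.

```python
# Hypothetical sketch of a quarterly bias check: given the model's flags, the
# human/medical verdicts and a sensitive attribute, compare false positive rates
# (unnecessary checks) across groups.
import numpy as np

def false_positive_rate_by_group(flagged, actually_abusive, group):
    """FPR per group: share of legitimate cases that were wrongly flagged."""
    rates = {}
    for g in np.unique(group):
        legit = (~actually_abusive) & (group == g)
        rates[str(g)] = flagged[legit].mean() if legit.any() else float("nan")
    return rates

# Toy data standing in for one quarter of reviewed cases (all values invented).
rng = np.random.default_rng(1)
group = rng.choice(["<40", ">=40"], size=5_000)
actually_abusive = rng.random(5_000) < 0.05
flagged = actually_abusive | (rng.random(5_000) < np.where(group == "<40", 0.12, 0.06))

print(false_positive_rate_by_group(flagged, actually_abusive, group))
# A persistent gap between groups would feed the quarterly review of the training
# data and of the cases where users overrode the model's advice.
```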

And the four big companies are not the only ones concerned.

“According to a BCG study, 84% of companies surveyed believe that responsible AI should be one of the major topics for top management,” says Laurent Acharian. “When you take a closer look, only 16% of these companies believe they have a mature responsible AI program. There is a huge gap between intentions and reality.”

A label inspired by the recommendations of the European Commission

The Positive AI association wants to provide a space for exchange between executives and data scientists and a reference framework, and then issue a label dedicated to responsible AI.

“[Positive AI] aims to assess the maturity of companies of all sizes and in all sectors in terms of responsible AI, to provide a tool for identifying levers for progress, and finally to allow them to obtain a label after an independent audit,” summarizes Laëtitia Orsini Sharps, president of the Positive AI association and director of consumer activities at Orange.

The reference framework is already in place. It was built on the “key principles of AI as defined by the European Commission”. Before launching the initiative, some of the project’s founders had implemented the EC recommendations published in 2019 in the guide “Ethics Guidelines for Trustworthy Artificial Intelligence” by the High-Level Expert Group on AI (AI HLEG).

“Malakoff Humanis was one of the first French companies, in 2019, to implement this framework. Today, we use it to evaluate two-thirds of our algorithms,” says David Giblas.

However, the executive notes that the EC initiative, although accompanied by a community of exchange, did not meet all of his organization’s needs. “We wanted to go further: it wasn’t enough in terms of tools and sharing of best practices,” he says.

From the initial 109 questions in the Trustworthy AI framework, Positive AI drew around 40 “more precise” evaluation criteria. About twenty examine the company’s approach and its governance of AI, and around twenty checkpoints are used to evaluate the algorithms themselves. Positive AI focuses its assessment on three areas: justice and fairness, transparency and explainability, and human intervention. Technically, this involves setting up a form of explainability (“minimum interpretability”, according to David Giblas) for the most sensitive algorithms.
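The Positive AI framework does not prescribe a particular explainability tool. As a minimal sketch of what “minimum interpretability” could mean in practice, the snippet below ranks a model’s input features by permutation importance; the model, data and feature names are all placeholders rather than anything drawn from the association’s criteria.

```python
# Minimal interpretability sketch (hypothetical): rank the inputs of a fitted
# model by permutation importance, producing a summary an auditor or business
# owner can read without opening the model itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print features from most to least influential on the model's decisions.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>10}: {result.importances_mean[idx]:.3f}")
```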

Clearly, Positive AI looks at business strategy, user practices, the design and governance of AI systems, and the algorithms “which represent the greatest ethical risks”.


“From a business point of view, responsible AI is at the service of humans,” says David Giblas. “It is non-discriminatory, fair, transparent, repeatable and explainable.”

Beyond the words, “the framework must allow you to position yourself on these questions with as little subjectivity as possible,” he believes.

The framework was drawn up by business experts and data scientists working with the founding members of Positive AI. The association then called on the consulting and auditing firm EY (Ernst & Young) to make it auditable.

A committee of “external and independent” experts was then tasked with reviewing the framework. The three specialists in question are Raja Chatila, Emeritus Professor of Robotics, AI and Ethics at Sorbonne University; Caroline Lequesne Roth, Lecturer in public law and head of the Master 2 in Algorithmic Law and Data Governance at Université Côte d’Azur; and Bertrand Braunschweig, Scientific Coordinator of the Confiance.ai program.

EY will carry out the audits at companies wishing to obtain the Positive AI label. It will offer “three progressive levels of certification”, depending on the maturity of the company.

Anticipating the costs of responsible AI

Beyond the audit, David Giblas believes that this “voluntary approach” requires a significant commitment from the teams, which must be equipped with tools, and from management, which sometimes needs to be educated on the subject.

While the executive does not say so explicitly, there is also a financial cost to consider. In the proposal for a regulation of the European Parliament and of the Council on AI, published on April 21, 2021, the authors estimated the cost of compliance at “between 6,000 and 7,000 euros for the supply of an average high-risk AI system worth around 170,000 euros by 2025”. To this must be added human oversight costs estimated at between 5,000 and 8,000 euros per year and per system, plus 3,000 to 7,500 euros per year for verification.
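A rough calculation from those figures: the recurring oversight and verification costs alone would amount to between 8,000 and 15,500 euros per year and per system, on top of the initial compliance work, or roughly 5 to 9 percent of the cited 170,000-euro system value each year.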

The European Union does not intend to legislate on low-risk systems. However, it encourages the companies that deploy them to come together to adopt a code of conduct and ensure that their AI systems are trustworthy. “The costs thus incurred would be at most as high as for high-risk AI systems, but most likely lower,” write the authors of the proposal.

For its part, Positive AI has not presented a price list. Pricing will be à la carte, depending on the number of algorithms evaluated and the size of the company. There is no question of analyzing all the AI modules and systems in place: the founding members are aware that such a mission would be complex, if not impossible. Obtaining the label will only be possible from the beginning of 2023.

The European ambitions of Positive AI

“In mid-2023, we intend to launch a digital platform so that companies can carry out a self-diagnosis before even embarking on the labeling process”, announces Laëtitia Orsini Sharps.


In the meantime, the association will organize sessions for sharing feedback and good practices.

Later, the aim will be to take the discussion beyond the scientific and business sphere. “Dialogue, both with the companies that implement AI and with the public authorities in France and in Europe, is absolutely key,” assures Laëtitia Orsini Sharps.

The founding members of Positive AI know this very well: they are ahead of European requirements that have not yet been set, since the AI Act is still being drafted.

Ariane Thomas, global technical director of sustainability at L’Oréal, highlights these European and even international ambitions. “We have a real desire to share the difficulties, the opportunities and the results obtained at this stage, knowing that this is an emerging subject and that we are going to evolve the framework to make it more and more precise,” she says.

Positive AI is not the first initiative of its kind. Labelia, another label for responsible and trusted AI, was created in 2021 by the Labelia Labs association. The Numeum consortium has produced a manifesto, a practical guide and a community, but does not offer certification. The Confiance.ai consortium focuses more on the robustness of AI, that is, ensuring that algorithm defects do not cause significant financial, material or human damage.
