The Confiance.ai industrial consortium wants to anticipate the AI Act (Computerworld)

Industrialists are stepping up with the Confiance.ai project. They want to meet companies' current needs while betting on meeting the requirements of the future AI Act, the European regulation aiming to harmonize standards in this area. An update on the program's progress and the objectives still to be achieved.

AI is a field to be mastered. That is the watchword emerging from the Confiance.ai event, held from October 4 to 6 at CentraleSupélec on the Saclay plateau. The consortium of French industrialists, researchers and academics behind the program of the same name is meeting again to take initial stock of its work and of the objectives that remain to be achieved. The initiative aims to develop and deploy a methodological and technological environment for integrating AI into critical systems, first in France and then internationally. Launched as part of the first phase of the national AI strategy and funded under France 2030, the project is two-thirds funded by the State (30 million euros) and one-third by industry.

The project, which initially brought together 13 founders (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales, Valeo, as well as the CEA, Inria, IRT Saint Exupéry and IRT SystemX), now counts around fifty partners and 300 people mobilized, the equivalent of 150 FTEs. The project has thus doubled its staff since July 2021 to keep up with demand. Twelve start-ups have also been recruited into the program. "It's about injecting trusted AI into companies, arming them so that they are compliant," says Bertrand Braunschweig, scientific coordinator of the Confiance.ai project at the IRT SystemX Institute for Technological Research.

Four platforms to address AI issues

The collective first produced some twenty scientific state-of-the-art reviews on the various themes addressed by trusted AI, including monitoring, data and knowledge engineering, symbolic artificial intelligence and the characterization of the notion of trust. The project's industrial partners then contributed an initial set of 11 use cases drawn from real operational problems, with a concrete repository of constraints, models, data and objectives on which the teams can base their work and test the various technological and methodological components identified, in order to validate their relevance or not. The result of this collaboration: a first version of the trusted environment, delivered at the end of 2021 and already deployed among the engineering partners.

One of them, Safran, reports that it has deployed the environment's tool chain: "Our team installed this environment on one of our compute servers, on-premise. This operation is strategic for us given the sensitive nature of our activities, because we can now apply the environment's building blocks to our internal use cases without relying on a public cloud. In the coming months we plan to evaluate the interoperability of MLOps tools with the explainability and robustness tools developed by Confiance.ai," explains Jacques Yelloz, chief engineer for AI at Safran. To date, the environment offers four platforms dedicated to major AI issues: one devoted to data life-cycle management (acquisition, storage, specification, selection, augmentation); a set of libraries dedicated to the robustness and monitoring of AI-based systems; another platform dedicated to explainability; and finally a platform for the embeddability of AI components, intended to identify the design constraints to be respected and to support them throughout implementation, up to the deployment of the component in the system.
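To give a sense of what a robustness check of this kind involves, here is a minimal, purely illustrative sketch: it is not Confiance.ai code, and the `predict` model and `robustness_rate` function are hypothetical stand-ins. The idea is simply to measure how often small random perturbations of an input leave a model's prediction unchanged.

```python
import random

def predict(features):
    """Toy stand-in for a trained model: classifies by a weighted sum."""
    score = 0.4 * features[0] + 0.6 * features[1]
    return 1 if score > 0.5 else 0

def robustness_rate(model, sample, n_trials=200, noise=0.05, seed=0):
    """Fraction of small random perturbations that leave the prediction unchanged."""
    rng = random.Random(seed)
    baseline = model(sample)
    stable = 0
    for _ in range(n_trials):
        perturbed = [x + rng.uniform(-noise, noise) for x in sample]
        if model(perturbed) == baseline:
            stable += 1
    return stable / n_trials

rate = robustness_rate(predict, [0.7, 0.8])
print(f"prediction stable under {rate:.0%} of perturbations")
```

A real robustness library would of course cover far richer perturbation models (adversarial attacks, distribution shift), but the stability-under-perturbation metric above captures the basic question such tools answer.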

Contribute to the future AI Act

The project was born out of a real need for reliable AI, particularly in sectors such as automotive, aeronautics, energy and defence. "Trust is an essential element in any system, including one based on artificial intelligence. Hence the birth of the project, to respond to a certain number of industrial and societal issues," explains Julien Chiaroni, director of the trusted AI challenge within the General Secretariat for Investment. At the same time, the program also anticipates the forthcoming European regulation, the AI Act, which aims to establish harmonized rules in this area. "The notion of sovereignty underlies the fields into which artificial intelligence will be integrated," specifies Bertrand Braunschweig. With this collective, France wants to carry weight and have a say in future European regulations and in the standards that will result from them. "There is a need to ensure the traceability, explainability and transparency of this AI," asserts Julien Chiaroni.

To this end, the consortium wants to establish cooperation at the European level. Last July, the subject had already come up in connection with the international relations being established. In Quebec, Confiance.ai has been working with an entire local ecosystem for a year to build a partnership and a local application. In Germany, ties have been strengthened, and a Franco-German label on trusted AI is even due to be launched this week. Among the manufacturers joining the project are Bosch, Siemens and SAP, as well as VDE, the German standardization institute. These companies provide unconditional support for the development of the project. "The Confiance.ai program is unique in the French landscape: it is the only one with an industrial vocation that is built with the entire ecosystem of start-ups, research institutes, academics, industrialists and so on. The whole thing works because the program's partner companies second personnel to it," specifies David Sadek, chair of the program's management committee and an executive at Thales.

Industry-specific use cases

This mobilization of actors also yields a very varied spectrum of use cases, as Rodolphe Gelin, deep learning expert for autonomous and connected vehicles at the Renault group, attests: "One of the use cases consists of checking the state of welds on a vehicle." By comparing two photos, the model makes a decision based on the image, and a human checks whether the result given by the AI is correct. "Each partner can bring its own database and algorithms," he adds. At Renault, another case, which proved difficult, involved processing the company's customer feedback using NLP.
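The image-comparison idea behind the weld check can be sketched in a few lines. This is an illustrative toy, not Renault's actual pipeline: the images are tiny hypothetical grayscale grids, and the `diff_fraction` / `weld_ok` helpers and their thresholds are assumptions for the example.

```python
def diff_fraction(reference, inspected, tolerance=10):
    """Fraction of pixels whose grayscale values differ by more than `tolerance`."""
    total = 0
    differing = 0
    for row_ref, row_ins in zip(reference, inspected):
        for p_ref, p_ins in zip(row_ref, row_ins):
            total += 1
            if abs(p_ref - p_ins) > tolerance:
                differing += 1
    return differing / total

def weld_ok(reference, inspected, max_diff=0.02):
    """Flag a weld as suspect if too many pixels deviate from the reference image."""
    return diff_fraction(reference, inspected) <= max_diff

# Hypothetical 2x2 grayscale patches of a weld seam.
good = [[120, 122], [119, 121]]
porous = [[120, 200], [119, 121]]  # one bright pixel: a possible defect
print(weld_ok(good, good))    # → True
print(weld_ok(good, porous))  # → False
```

In practice a deep learning model replaces the pixel-diff heuristic, but the human-in-the-loop verification described in the article sits on top of exactly this kind of pass/fail decision.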

At Naval Group, also a program partner, the tests focused on detecting anomalies on vibration sensors and their data collectors. The need: to reduce the number of sensors and make them smart without losing operational capability. Conducted jointly with Sopra Steria, the project went as far as detecting and classifying the anomalies discovered by the AI and highlighting patterns with specific properties. In total, around a hundred software components are being designed to meet the scientific and industrial challenges, says Bertrand Braunschweig. For him, the point is to deliver everything quickly in order to be ready for 2024, the expected publication date of the AI Act.
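A simple way to picture anomaly detection on vibration readings is a z-score test: flag any reading that sits far from the mean of the signal. This is a minimal sketch under that assumption, with synthetic data; Naval Group's actual classifiers are certainly more sophisticated.

```python
from statistics import mean, stdev

def detect_anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard deviations from the mean."""
    if len(set(readings)) < 2:
        return []  # constant signal: no deviation to measure
    mu = mean(readings)
    sigma = stdev(readings)
    return [i for i, x in enumerate(readings) if abs(x - mu) / sigma > threshold]

# Steady vibration signal with one spike injected at index 8 (synthetic data).
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0, 0.9, 1.05]
print(detect_anomalies(signal))  # → [8]
```

Classifying the flagged anomalies and spotting recurring patterns, as described above, would then build on top of a detection step of this kind.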
