AFNOR publishes a strategic roadmap on the standardization of artificial intelligence and invites companies to contribute

As part of France 2030, the French government has launched a Grand Challenge entitled "Securing, certifying and making systems based on artificial intelligence more reliable", piloted by the General Secretariat for Investment (SGPI) and financed by the Future Investment Programme. AFNOR, the French standardization association, was mandated to "create the normative environment of trust accompanying the tools and processes for the certification of critical systems based on artificial intelligence". It recently published a strategic roadmap presenting six essential axes for the standardization of AI.

Developing trusted AI is essential. To this end, the French government has committed 1.2 million euros, under the Investments for the Future Programme (PIA) and the France Relance plan, to facilitate the creation of consensual standards accepted on a global scale.

Cédric O, Secretary of State for the Digital Transition and Electronic Communications, also declared:

"Trusted AI, i.e. AI for critical systems, is now necessary in many fields such as autonomous cars, aeronautics and space."

Launched by the SGPI, the Grand Challenge "Securing, making reliable and certifying systems based on artificial intelligence" aims to build tools guaranteeing the trust placed in products and services integrating AI, and served as a technical basis for the European Commission's proposal for an AI regulation of 21 April. The SGPI has developed an approach based on three pillars: research, applications and standardization.

This last pillar is entrusted to AFNOR, which brings together many players in the AI ecosystem, with the aim of creating synergies in France, with other countries within the framework of the International Organization for Standardization (ISO), and with other international consortia.

To structure the ecosystem, the association will set up a cooperation platform between French AI players, lead strategic actions in standardization, and develop European and international cooperation.

A national lack of understanding of standardization

Not all French companies understand the strategic importance of standards, in particular start-ups, SMEs and mid-sized companies (ETIs), which, insufficiently integrated into the standardization ecosystem, do not grasp what is at stake. Economic actors seem to be losing interest in standardization even as they worry about regulatory compliance.

Experts from the companies concerned contribute directly to the development of standards, at national, European and international level; these standards will serve as technical support for European regulations.

This European regulation follows on from the Data Governance Act presented in November 2020, the GDPR in force since 2018, and the European Parliament's study of the role of AI in the Green Deal.

AFNOR’s roadmap

Some 260 French AI players took part in the consultation carried out in the summer of 2021 to establish this AI standardization strategy. All companies in the ecosystem will be able to participate in the development of standards within the standardization committees.

Patrick Bezombes, president of the French standardization commission, states:

"The contribution is not reserved for large groups, quite the contrary. Start-ups and SMEs are an essential link in the ecosystem; they must make their voices heard and give their point of view: the orientations chosen will have a direct impact on them, right at the heart of their business."

The roadmap comprises six axes:

  • Develop standards of trust

The priority characteristics to be standardized are safety, security, explainability, robustness, transparency and fairness (including non-discrimination). Each of these characteristics will be given a definition, a description of the concept, the technical requirements, and the associated metrics and controls, with particular attention to security.

  • Develop standards on governance and management of AI

All new AI applications carry risks: poor data quality, poor design, poor qualification. A risk analysis for AI-based systems is therefore essential, and companies will have to set up quality and risk management systems. Within the framework of ISO/IEC work, two standards are being developed relating to:

• an AI quality management system: ISO/IEC 42001 (AI management system);
• an AI risk management system: ISO/IEC 23894 (AI risk management).

These standards could become global references, just as ISO 9001 is now the international reference for quality management, and could become harmonized European standards.

  • Develop standards on AI monitoring and reporting

The place of humans, from the design of AI systems to their use, is essential. It is therefore necessary to ensure that these systems are controllable, and that humans will be able to supervise them and regain control at critical moments when the AI goes outside its nominal operating range.

Reporting processes will make it possible to trace major incidents so that they can be handled in real time before they spread. In the event of incidents and accidents, audits will be carried out both on the products and on the standards on which they are based.

  • Develop certification body competency standards

Certification bodies will need to hire and train staff, and put in place methods and tools for assessing AI systems, in order to maintain trust in increasingly technical products, processes and services. They will have to verify that companies have set up processes for the development and qualification of AI systems, and that products comply with the requirements, in particular regulatory ones.

  • Develop the standardization of certain digital tools

Synthetic data produced by simulation enables the specification, design, training, testing, validation, qualification and auditing of AI systems. Simulation allows highly repeatable tests, and thus helps to better understand and explain certain behaviors of AI systems. It will be used more and more, and new standards for the qualification of simulations and the interoperability of simulations and objects (digital twins) will have to be put in place.

  • Simplify access to and use of standards

A consultation platform will soon be opened and adjusted as needed.
