The incoming Czech Presidency of the Council of the EU has shared a working document with other EU governments to gather their views on the definition of artificial intelligence (AI), high-risk systems, governance and national security.
The document, obtained by EURACTIV, will serve as the basis for discussion in the telecommunications working group on 5 July, with the aim of producing an updated compromise text by 20 July. Member states will then be invited to provide written comments on the new compromise by 2 September.
“The Czech Presidency has identified four major unresolved issues that require further discussion and where it would be crucial to receive instructions from Member States in order to take the negotiations to the next stage,” the document states.
This document is the first from the Czech Presidency, whose term only officially begins in July. It signals continuity with the direction taken by the French Presidency of the Council of the EU (PFUE) and sets out the main topics on which the Czechs will focus.
The internal document notes that a “large number” of EU countries have questioned the definition of what constitutes an AI-based system, arguing that the current definition is too broad and ambiguous and therefore risks covering simple software as well.
Furthermore, a related question is to what extent the Commission should be able to modify, through secondary legislation, Annex I of the Regulation, which defines the techniques and approaches of artificial intelligence.
The Czech Presidency proposes various solutions to address these concerns.
The most conservative option is to maintain the Commission’s proposal or adopt the wording proposed by the PFUE, adding clarifying elements such as references to learning, reasoning and modelling.
In this scenario, the EU executive retains its delegated powers, and changes can only be made through an ordinary legislative procedure.
The other possibilities involve a narrower definition covering AI systems developed either solely by machine learning techniques or by machine learning and knowledge-based approaches.
In this case, Annex I is deleted and the AI techniques are integrated directly into the text, either in the preamble of the law or in the corresponding article. The Commission would only have the power to adopt implementing acts to clarify the existing categories.
High-risk systems
Annex III of the AI Act lists AI applications considered high-risk for human well-being and fundamental rights. However, some Member States consider the wording too broad and argue that only use cases for which an impact assessment has been carried out should be covered.
Here, the most prudent option is to keep the text as it stands in the French compromise.
Alternatively, EU countries could argue for the removal or addition of certain use cases, or for making the wording more precise.
The Czech Presidency also proposed adding a further element, namely high-level criteria for assessing what, in practice, constitutes a significant risk. Providers would then self-assess whether their system meets these criteria.
Another way to narrow the classification would be to distinguish whether the AI system provides fully automated decision-making, which would automatically be high risk, or if it only informs human decisions.
In the latter case, the system would only be considered high-risk if the information generated by the AI is meaningful in decision-making. However, the Commission would have to clarify through secondary legislation what constitutes a meaningful contribution.
EU countries are asked to consider whether the Commission should retain the power to add new high-risk cases to the annex, whether it should also be able to remove use cases under certain conditions or whether these powers should be removed.
Governance and implementation
Several EU countries have expressed concern that the “too decentralized governance framework at the national level” of the regulation could limit the effective application of the legislation, in particular because they fear not having sufficient capacities and expertise to enforce the rules of AI.
At the same time, the Czech Presidency notes that the legislation should offer “a certain level of flexibility for the law and national specificities” and that “delegating enforcement powers to a more central level also requires careful practical and budgetary considerations”.
As clarified by the French Presidency, the current governance framework follows the EU Market Surveillance Regulation: national authorities are in charge, an AI Board provides coordination, and Commission interventions are limited to extreme cases.
Another option would be to give Member States more support by setting up EU testing facilities, an expert group and an emergency mechanism to speed up assistance.
The AI Board could also be strengthened to assist national authorities, with a more explicit mandate, modelled on the Medical Devices Regulation, to guide and coordinate market surveillance activities.
Finally, the Commission could be empowered to open direct investigations in exceptional circumstances, but this “has considerable practical and financial implications”.
National security exemption
The document indicates that a large majority of EU countries want AI applications related to national security and military uses to be explicitly excluded from the AI regulation, but believe that this concept is not yet sufficiently defined.
“This Regulation does not apply to AI systems developed or used exclusively for military or national security purposes,” reads the existing compromise, which could be amended to delete the word “exclusively”, although this may be a source of ambiguity.
Another solution would be to modify the wording to refer only to AI systems placed on the market or put into service for military or national security purposes, thus excluding the development phase.