European ministers agreed to a general approach to the regulation on artificial intelligence (AI Act) at the meeting of the Telecommunications Council on Tuesday (6 December). EURACTIV provides an overview of the main changes this entails.
The Artificial Intelligence (AI) Regulation is a legislative proposal to regulate AI technology based on its potential for harm. The Council of the EU is the first co-legislator to complete the first stage of the legislative process. The European Parliament should finalize its version around March 2023.
“The Czech Presidency’s final compromise text takes into account the main concerns of Member States and preserves the delicate balance between protecting fundamental rights and promoting the adoption of AI technology,” said Ivan Bartoš, Czech Deputy Prime Minister for Digital Affairs.
Definition of AI
The definition of AI was central to the discussions, as it also determines the scope of the regulation.
Member States were concerned that traditional software would fall within the scope, and therefore proposed a narrower definition covering systems developed through machine learning approaches and logic- and knowledge-based approaches. These elements may later be clarified or updated by the Commission through delegated acts.
General Purpose AI
General-purpose AI includes large language models that can be adapted to perform a variety of tasks. As such, it fell outside the scope of the AI Regulation, which originally covered only systems with a specific intended purpose.
However, member states considered that excluding these critical systems from the scope would have paralyzed the AI Regulation, while the specificities of this emerging market required some adaptation.
The Czech Presidency of the Council of the EU resolved the issue by instructing the Commission to carry out an impact assessment and a consultation on adapting the rules to general-purpose AI through an implementing act, within a year and a half of the regulation’s entry into force.
Prohibited practices
The AI Regulation prohibits the use of the technology for subliminal techniques, the exploitation of vulnerabilities, and social scoring of the kind used in China.
The ban on social scoring has been extended to private actors, to prevent it from being circumvented via private contractors. The concept of vulnerability has also been extended to cover socio-economic aspects.
High risk categories
In Annex III, the regulation lists the uses of AI that are considered to present a high risk of causing harm to persons or property and which, therefore, must comply with stricter legal obligations.
In particular, the Czech Presidency of the Council introduced an extra layer: to be classified as high risk, a system must be decisive in the decision-making process and not “purely incidental” to it. It is up to the European executive to define this concept by means of an implementing act.
The Council removed from the list the detection of “deepfakes” by law enforcement, crime analytics, and the verification of the authenticity of travel documents. However, critical digital infrastructure and life and health insurance have been added.
In another important change, the Commission will be able not only to add high-risk use cases to the annex, but also to remove them under certain conditions.
In addition, the obligation for providers of high-risk systems to register in a European database has been extended to users that are public bodies, with the exception of law enforcement agencies.
Obligations relating to high-risk systems
High-risk systems will need to comply with requirements such as data quality and detailed technical documentation. The Czech Presidency says these provisions “have been clarified and adjusted so that they are easier to implement technically and less burdensome for stakeholders”.
The general approach also aims to clarify the distribution of responsibilities along AI value chains and how the AI regulation will interact with existing sectoral legislation.
Member States have introduced into the text several exemptions for law enforcement authorities, some of which are intended to serve as bargaining chips during negotiations with the European Parliament.
For example, while users of high-risk systems will need to monitor them after launch and notify the provider of serious incidents, this obligation does not apply to sensitive information derived from law enforcement activities.
On the other hand, EU member state governments seem less willing to make concessions on excluding national security, defence and military AI applications from the scope of the regulation. They also want police forces to be able to use “real-time” remote biometric identification systems in exceptional circumstances.
Governance and enforcement
The Council has strengthened the AI Board, which will bring together the competent national authorities, notably by introducing elements already present in the European Data Protection Board (EDPB), such as an expert group.
The general approach also provides for the Commission to designate one or more test centers to provide technical support for implementation and to adopt guidance on how to comply with the legislation.
Penalties for breaches of the AI obligations have been eased for SMEs, while a series of criteria has been introduced for national authorities to take into account when calculating a penalty.
The AI Regulation provides for the creation of regulatory sandboxes: controlled environments, supervised by an authority, in which companies can test AI solutions.
The Council text allows these tests to be carried out in real-world conditions and, under certain conditions, even without supervision.
Transparency requirements for emotion recognition and deepfakes have also been reinforced.
[Edited by Anne-Sophie Gayet]