The Artificial Intelligence (AI) Regulation co-rapporteurs have proposed extending the scope of the regulation to metaverse environments that meet certain conditions. The latest amendments also addressed risk management, data governance and documentation of high-risk systems.
European Parliament co-rapporteurs Dragoş Tudorache and Brando Benifei circulated two new batches of compromise amendments, consulted by EURACTIV, on Wednesday (28 September), ahead of the technical discussion with the other political groups on Friday (30 September).
These latest batches introduce significant changes to the scope, purpose and requirements of the regulation for high-risk AI systems regarding risk management, data governance and technical documentation.
A new article has been added aiming to extend the scope of the regulation to operators of AI systems in specific metaverse environments that fulfill several cumulative conditions.
These criteria are as follows: the metaverse must require an authenticated avatar, be designed for large-scale interaction, allow social interactions similar to those in the real world, enable financial transactions with real-world effects, and entail risks to health or fundamental rights.
The scope has been extended from AI providers to any economic operator placing an AI system on the market or putting it into service.
The text clarifies that the regulation does not prevent national laws or collective agreements from introducing stricter obligations intended to protect workers’ rights when employers use AI systems.
At the same time, AI systems intended solely for scientific research and development are excluded from the scope.
The question of whether any AI system that could interact with or impact children should be considered high risk, as some MEPs have called for, has been postponed.
Additionally, the centre-right MEPs’ amendment to narrow the scope for AI providers or users in a third country was also reserved for future discussion as it relates to the definition, according to a note in the margin of the document.
Purpose of the regulation
The rules provided for by the regulation should cover not only the placing of AI on the market but also its development. The objectives of harmonising rules for high-risk systems and supporting innovation have also been added.
The amendment by centre-left MEPs, led by Mr Benifei, aimed at introducing principles applicable to all AI systems was “suspended”, according to a comment in the margin of the text. Similarly, the debate over the governance model, whether an EU agency or an improved version of the European Artificial Intelligence Board, has also been put on hold.
Requirements for high-risk AI
The compromise amendments stipulate that high-risk AI systems must comply with the regulation’s requirements throughout their lifecycle and take into account the most advanced and relevant technical standards.
The issue of considering foreseeable uses and misuses of the system in the compliance process was also put on hold. It will be discussed alongside the topic of general-purpose AI, that is, large-scale models that can be adapted to a variety of tasks.
As for the risk management system, the MEPs specified that it could be integrated into the existing procedures put in place within the framework of sectoral legislation, as is the case in the financial sector, for example.
The risk management system will need to be updated whenever a significant change occurs in a high-risk AI system “to ensure its continued effectiveness”.
The list of elements that risk management must take into account has been extended to cover health, legal and fundamental rights, the impact on specific groups, the environment, and the spread of disinformation.
If, after the risk assessment, AI providers consider that there are still relevant residual risks, they must provide the user with a reasoned opinion on why these risks can be considered acceptable.
The compromise amendments provide that, for high-risk AI, techniques such as unsupervised learning and reinforcement learning that do not use validation and test datasets must be developed on the basis of training datasets meeting a specific set of criteria.
The intention here is to prevent the development of biases, and it is reinforced by the requirement to consider potential feedback loops.
Additionally, the text indicates that the validation, testing, and training datasets should all be separate, and that the legality of the data sources should be verified.
Wording has been introduced to give more latitude to SMEs to comply with the obligation to keep technical documentation on high-risk systems, after approval by national authorities.
The list of technical information has been significantly expanded to include information such as the user interface, how the AI system works, expected inputs and outputs, cybersecurity measures, and carbon footprint.