AI Law: Czech EU Presidency proposes tailored requirements for general-purpose AI

The Czech Republic wants the Commission to assess how best to adapt the requirements of the Artificial Intelligence (AI) Act for general-purpose uses of the technology, according to the latest compromise text seen by EURACTIV. Other aspects covered include law enforcement, transparency, innovation and governance.

The compromise, released on Friday (September 23), completes the third revision of the AI Act, a landmark proposal to regulate artificial intelligence using a risk-based approach. The document will be discussed at a meeting of the Council's Telecom and Information Society Working Party on September 29.

General-purpose AI systems

How to approach general-purpose AI has been a widely debated topic. These systems, such as large language models, can be adapted to perform various tasks, meaning the provider may not be aware of the end use of its system.

The question is whether general-purpose AI systems should have to comply with the regulation when they can be used in, or integrated into, high-risk applications. During discussions in the EU Council, several countries deplored the absence of an assessment of what directly applying these obligations might entail in terms of technical feasibility and market development.

The Czech Presidency has proposed that the European Commission adapt the relevant obligations via implementing acts within one and a half years of the regulation's entry into force, following a public consultation and an impact assessment on how best to take into account the specific nature of these technologies.

However, the Czech EU Presidency believes that these future obligations for general-purpose AI systems should not apply to SMEs, as long as they are not partner enterprises of, or linked to, larger companies.

In addition, the EU executive could adopt further implementing acts detailing how providers of high-risk general-purpose AI systems must comply with the examination procedure.

Where providers do not envisage a high-risk application for their general-purpose system, they would be exempt from the corresponding requirements. If providers become aware of misuse, the compromise requires them to take action commensurate with the seriousness of the associated risks.

The compromise reduces the Commission’s discretion to adopt common technical specifications for high-risk, general-purpose AI systems.

Law enforcement

A set of provisions favourable to law enforcement authorities has been included.

The Czechs proposed extending the obligation to register in the public database from providers of high-risk systems to all public bodies using this type of AI, with the notable exception of law enforcement, border control, migration and asylum authorities.

Furthermore, the obligation to report serious incidents to the provider of a high-risk system, or to supply information for post-market monitoring, would not apply to sensitive operational data related to the activities of law enforcement agencies.

Likewise, market surveillance authorities would not be required to disclose sensitive information when informing their peers and the Commission that a high-risk system has been deployed without a conformity assessment under the emergency procedure.

The article imposing confidentiality on all entities involved in applying the AI Regulation has been extended to guarantee the protection of criminal and administrative proceedings and the integrity of information classified under EU or national law.

Regarding the testing of new AI systems in real-world conditions, the requirement for subjects to give informed consent has been waived for law enforcement authorities, provided that the testing does not have a detrimental effect on the subjects.

Transparency requirements

In terms of transparency, if an AI system is intended to interact with humans, the person must be informed that they are dealing with a machine, unless this is obvious "from the point of view of a reasonably well-informed, observant and circumspect natural person".

The same obligations apply to biometric categorisation and emotion recognition AI systems, with the sole exception, in all these cases, of criminal investigations. Even then, non-disclosure must be "subject to appropriate safeguards for the rights and freedoms of third parties".

Measures in favor of innovation

The list of AI ecosystem players involved in regulatory sandboxes has been expanded to include "stakeholders and relevant civil society organisations".

As regards the support activities that member states will have to put in place, Prague plans to include the organisation of training courses tailored to explaining the application of the AI Regulation to SMEs and start-ups, as well as to local authorities.


Within the European Artificial Intelligence Board, which will bring together all the competent national authorities of the EU, the Czechs propose creating two sub-groups to serve as a platform for cooperation between market surveillance authorities.

Wording has been added that would empower the Commission to carry out market assessments to identify specific issues requiring urgent coordination between market surveillance authorities.


Prague believes that when setting penalties, EU countries must take into account the principle of proportionality for non-professional users.

The compromise specifies which violations would lead to an administrative fine of up to €20 million or 4% of a company's annual turnover. These include breaches of the obligations of providers, importers, distributors and users of high-risk systems, as well as of the requirements for notified bodies and legal representatives.

For SMEs and start-ups, the cap has been lowered from 3% to 2% of annual turnover.
