Europe, future global hub of artificial intelligence?

For a little over a year, the European Union has been working on a bill on artificial intelligence. The EU aims to establish itself as a leader in this field by promoting innovation, while regulating the technology to protect citizens.

Despite its potential and usefulness in areas such as health and transport, artificial intelligence (AI) still raises fears about the risks it poses in certain applications. Several countries, such as China, are seeking to regulate the technology. On April 21, 2021, the European Commission in turn presented a proposal for rules and actions intended “to make Europe the global hub of trustworthy artificial intelligence”. The proposal is meant to ensure that citizens can trust the AI they use.

“By setting the standards, we can pave the way for ethical technology worldwide, while safeguarding EU competitiveness. Future-proof and conducive to innovation, our rules will apply when strictly necessary: when the security and fundamental rights of EU citizens are at stake,” explained Margrethe Vestager, Executive Vice-President of the European Commission. But what are these rules, and where does the proposal to regulate AI stand a year later?

A risk-based bill

The European Commission’s proposed regulation follows a risk-based approach. It aims to address the risks related to specific uses of AI through a four-level classification. The first level, “unacceptable risk”, concerns systems considered “a clear threat to people’s security, livelihoods and rights”. AI applications that seek to deprive users of their free will by manipulating human behavior (for example, a voice-assisted toy encouraging a minor to behave dangerously), as well as social scoring systems, would be prohibited.

The “high risk” category covers technologies used in fields, such as employment, where they can cause harm. In the professional sector, CV-sorting software used in recruitment procedures can, for example, be discriminatory. These systems would be required to comply with a set of obligations before being placed on the market. Among other things, the information provided would have to be “clear and adequate for the user”.

The “limited risk” level imposes transparency obligations on systems such as chatbots. Their users should be informed that they are interacting with a machine, so that they can make an informed decision about whether to continue the interaction.

“The AI Act is very, very different, both from the Digital Markets Act and the Digital Services Act. (…) it’s much more about shaping the future in this approach of looking at the use cases of AI and that makes it, I think, more difficult to manage.”

Margrethe Vestager

Executive Vice-President of the European Commission

Finally, the last category, “minimal risk”, allows the free use of applications such as video games or AI-based spam filters. Since these pose little or no risk to citizens’ rights or safety, the Commission does not foresee any measures in this area.

These rules are associated with a coordinated plan on AI, which notably defines the strategy “to accelerate investments in artificial intelligence technologies to foster a resilient economic and social recovery”. This would require the establishment of conditions conducive to the development and adoption of AI within the EU (data sharing, etc.).

Slow progress, both justified and criticized

A year later, the European Commission’s AI bill is still at the draft stage. It has made little progress, especially compared with other bills such as the Digital Markets Act and the Digital Services Act, introduced at the end of 2020 and recently passed. For Margrethe Vestager, however, this slowness is justified: “The AI Act is very, very different, both from the Digital Markets Act and the Digital Services Act. In the Services Act and in the Markets Act, they have drawn on experience we have gained from competition cases, unfair commercial practices, the behavior of our platforms and the way they provide their services. When it comes to AI law, it’s much more about shaping the future directly in this approach of looking at the use cases of AI and that makes it, I think, more difficult to manage,” said the Commissioner during an AI and Technology Summit organized by Politico.

The slow pace of the AI regulation proposal has nonetheless been criticized by other political figures. This is notably the case of MEP Axel Voss who, in addition to believing that the EU is too slow, criticizes the project itself: “We are happy to regulate something here and something there, and to improve a bit, but there is no overall plan or concept (…) It is always good that all our Member States have a strategy of AI, but how does it relate? We have to unite, we have to set up European projects, otherwise I don’t see how we’re going to be a kind of serious competitor,” he declared.

“Predictive policing should be added among the prohibited practices, as it violates the presumption of innocence as well as human dignity.”

Preliminary report on the European Commission’s AI proposal

Dragos Tudorache and Brando Benifei

EU Member States each have their own AI strategy. In France, it was announced by Emmanuel Macron in 2018, with a budget of 1.5 billion euros. More recently, in November 2021, the government unveiled a €2.2 billion plan to make the country “a champion of artificial intelligence” by addressing the shortage of French talent in the field and investing in research.

Between innovation and regulation

Concretely, the proposed AI law has so far mostly been the subject of reports outlining what it should contain. Competition was also addressed in one of them, published last November. Axel Voss argued that the EU has fallen behind in this “global technology race” dominated by the United States and China, and that it must step up its efforts to catch up. He believed that future regulation should focus on the potential of AI, giving more room to innovation than to restriction. “If we only think in this dimension of protection and prohibition and so on, we are not creating innovative ideas, we are not creating innovative technology,” he explained.

Other members of the European Parliament consider, on the other hand, that certain AI applications should be banned outright. In April, MEPs Dragos Tudorache and Brando Benifei finalized a preliminary report on AI in which they call for a ban on predictive policing, which uses analytical techniques, and data in particular, to estimate the likelihood that a person will commit a crime or re-offend. “Predictive policing should be added among the prohibited practices, as it violates the presumption of innocence as well as human dignity,” the report says.

They also want to extend the list of high-risk applications to cover systems designed to interact with children, deepfakes, and algorithms with a potential impact on democratic processes. According to Brando Benifei, further amendments will be added to the proposal in a later phase, together with other colleagues from the European Parliament, including Axel Voss. That stage may be an opportunity to strike the right balance between innovation and regulation, a balance that for now looks difficult to achieve, despite the EU’s desire to offer citizens trustworthy AI without hindering the development of these technologies.
