Between ethics and laws, who can govern artificial intelligence systems?

We have all begun to realize that the rapid development of AI is going to change the world we live in. AI is no longer just a branch of computer science: it has escaped from research labs with the development of "AI systems", "software that, for human-defined objectives, generates content, predictions, recommendations or decisions influencing the environments with which they interact" (the European Union's definition).

The issues of governance of these AI systems – with all the nuances of ethics, control, soft regulation and binding legislation – have become crucial, as their development today is in the hands of a few digital empires, the GAFA, NATU and BATX, which have become the masters of real societal choices about the automation and "rationalization" of the world.

The complex fabric intersecting AI, ethics and law is thus built on power relations – and connivance – between states and tech giants. But citizen engagement becomes necessary in order to assert imperatives other than a technological solutionism in which "everything that can be connected will be connected and streamlined".

An ethics of AI? The main principles at an impasse

Of course, the three major ethical principles allow us to understand how a genuine bioethics has been built since Hippocrates: the personal virtue of "critical prudence" (virtue ethics), the rationality of rules that must be capable of being universal (deontology), and the evaluation of the consequences of our actions with regard to the general happiness (consequentialism).


For AI systems, these major principles have also been the basis of hundreds of ethics committees and charters: the Holberton–Turing oath, the Montreal Declaration, the Toronto Declaration, UNESCO's programme… and even Facebook's! But these AI ethics charters have never yet resulted in a sanction mechanism, or even the slightest reprimand.

On the one hand, the race for digital innovation is essential to capitalism, in order to overcome the contradictions in the accumulation of profit, and essential to states, in order to develop algorithmic governmentality and an unprecedented form of social control.

But on the other hand, AI systems are always both remedy and poison (a pharmakon in the sense of Bernard Stiegler), and they therefore continually create different ethical situations which cannot be settled by principles alone but require "complex thought", a dialogic in the sense of Edgar Morin, as shown by the analysis of the ethical conflicts around the French health data platform, the Health Data Hub.

An AI law? A construction between soft regulation and binding legislation

Even if the major ethical principles will never be operational, it is from their critical discussion that an AI law can emerge. Here the law comes up against particular obstacles: the scientific instability of the definition of AI, the extraterritorial nature of digital technology, but also the speed with which platforms develop new services.

In the development of AI law, we can then see two parallel movements. On the one hand, soft regulation through simple directives or recommendations, for a progressive legal integration of standards (from technology towards law, such as cybersecurity certification). On the other hand, genuine legislation through binding rules (from positive law towards technology, such as the GDPR on personal data).

Power relations… and complicity

Personal data is often described as a coveted new black gold, because AI systems have a crucial need for big data to fuel their statistical learning.

In 2018, the GDPR became a genuine European regulation of this data, and its adoption benefited from two major scandals: the NSA's PRISM surveillance program and the hijacking of Facebook data by Cambridge Analytica. The GDPR even allowed activist lawyer Max Schrems, in 2020, to have the Court of Justice of the European Union invalidate transfers of personal data to the United States. But the signs of complicity between states and the digital giants remain numerous: Joe Biden and Ursula von der Leyen keep reorganizing these contested data transfers through new agreements.

The GAFA, NATU and BATX monopolies guide the development of AI systems today: they control possible futures through "predictive machines" and the capture of attention, they impose the complementarity of their services and, soon, the integration of their systems into the Internet of Things. States are reacting to this concentration.

In the United States, a lawsuit aiming to force Facebook to sell Instagram and WhatsApp is due to open in 2023, and an amendment to antitrust legislation is due to be voted on.

Read also:
Europe proposes rules for artificial intelligence

In Europe, from 2024, the Digital Markets Act (DMA) will regulate acquisitions and prohibit the large "gatekeepers" from self-preferencing or bundling offers across their various services. As for the Digital Services Act (DSA), it will oblige the very large platforms to be transparent about their algorithms and to handle illegal content quickly, and it will ban targeted advertising based on sensitive characteristics.

But collusion remains strong, because each side also protects "its" giants by brandishing the Chinese threat. Thus, under threats from the Trump administration, the French government suspended collection of its "GAFA tax", even though it had been voted by parliament in 2019, and tax negotiations continue within the framework of the OECD.

A new and original European regulation on the specific risks of AI systems

Spectacular progress in pattern recognition (on images, text, voice or location data) is creating prediction systems that present increasing risks to health, safety and fundamental rights: manipulation, discrimination, social control, autonomous weapons… After the Chinese regulation on the transparency of recommendation algorithms in March 2022, the adoption of the AI Act, the European regulation on artificial intelligence, will be a new step in 2023.


Figure: European risk classification of AI systems (Yves Meneceur, 2021).

This original legislation is based on the degree of risk of AI systems, in a pyramid approach similar to that used for nuclear risks: unacceptable risk, high risk, limited risk, minimal risk. Each level of risk is associated with prohibitions, obligations or requirements, which are specified in the annexes and are still being negotiated between the Parliament and the Commission. Compliance and sanctions will be monitored by the competent national authorities and the European Artificial Intelligence Board.
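As a purely illustrative sketch (not the legal text itself), the pyramid can be pictured as a simple mapping from risk tier to the kind of regulatory treatment the draft attaches to it; the example systems and obligations below are simplified assumptions, not quotations from the regulation.

```python
# Illustrative sketch of the AI Act's risk pyramid (simplified assumptions, not the legal text).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical example systems for each tier, for illustration only.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: "social scoring by public authorities",
    RiskTier.HIGH: "CV-screening system used for hiring",
    RiskTier.LIMITED: "chatbot that must disclose it is not human",
    RiskTier.MINIMAL: "spam filter",
}

# Simplified summary of the regulatory treatment attached to each tier.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "risk management, data governance, human oversight, conformity assessment",
    RiskTier.LIMITED: "transparency towards users",
    RiskTier.MINIMAL: "voluntary codes of conduct",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: e.g. {EXAMPLES[tier]} -> {TREATMENT[tier]}")
```

The point of the pyramid is precisely this asymmetry: the bulk of the obligations falls on the narrow "high risk" band, while the broad base of everyday systems is left almost untouched.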

Citizen engagement for AI rights

To those who consider the involvement of citizens in the construction of an AI law to be a utopia, we can first recall the strategy of a movement such as Amnesty International: advancing international law (treaties, conventions, regulations, human rights tribunals) and then using it in concrete situations, such as the Pegasus spyware case or the campaign to ban autonomous weapons.

Another successful example is the movement None of Your Business ("that's none of your business"): advancing European law (GDPR, Court of Justice of the European Union, etc.) by filing hundreds of complaints each year against privacy-violating practices by digital companies.

Read also:
Facial recognition: from phone unlocking to mass surveillance

All these citizen collectives, which work to build and use AI law, take very diverse forms and approaches: from the European consumer associations that jointly file complaints against the management of Google accounts, to the saboteurs of 5G antennas who refuse the total digitization of the world, via the inhabitants of Toronto who thwarted Google's great smart-city project, or the free-software activist doctors who want to protect health data…

This highlighting of different ethical imperatives, at once opposed and complementary, corresponds well to the ethics of complex thought proposed by Edgar Morin, which accepts resistance and disruption as inherent to change.
