Artificial intelligence: technical standards and fundamental rights, a risky mix

The future European regulation, the "AI Act", aims to create a legal framework for all artificial intelligence (AI) systems, in particular those posing significant risks to safety or to fundamental rights such as non-discrimination, privacy, freedom of expression or human dignity.

In its current version, the draft regulation treats all these risks in the same way. A CE marking (for "European conformity", like the existing one) would indicate that the AI system is deemed sufficiently safe to be placed on the market, whether in terms of physical safety or of fundamental rights. The text attaches great importance to technical standards and risk analyses, in the hope of standardizing evaluation methods.

Nevertheless, a tension persists between the "risk-based" approach proposed by the Commission and the approach based on respect for fundamental rights, favored by the courts and by the work of the Council of Europe.

Indeed, while it is easy to imagine a test bench for a safety criterion, like the tests carried out on children's toys before they are placed on the market, it is more difficult to assess non-discrimination in this way, because it is contextual.

Testing for discrimination: an impossible bet?

For a product to be placed on the market, suppliers must comply with certain technical specifications defined outside the text of the law, most often in harmonized standards.

Many countries are pinning high hopes on these standards, which could help establish uniform technical and legal requirements for AI systems. For example, they could define criteria for the quality, fairness, safety or even explainability of these systems, and even strengthen Europe's strategic positioning in the global race for AI.

But how can neutral technical criteria be developed around moral and cultural value judgments?

Some of these systems directly affect our right to non-discrimination. For example, the use of facial recognition by police departments has led to the arrest of several Black men wrongly identified by an automatic system on CCTV cameras. In the same way, Amazon's recruitment algorithm rejected women's CVs more readily than men's.

To test for the presence of this type of discriminatory bias, current techniques consist of verifying that the system works properly for different subgroups of the population, separating individuals by gender or skin color. But standardizing these test methods raises several difficulties.
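
As an illustration, here is a minimal sketch in Python of this kind of subgroup test, with invented data and group labels: the same error metric is computed separately for each group and the results are then compared. The helper function name is ours, not taken from any standard.

```python
# A minimal sketch (hypothetical data and group labels) of a subgroup test:
# the same metric is computed separately for each group and then compared.
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Return the classifier's error rate for each subgroup."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Toy example: decisions of a hiring or recognition system, split by gender.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["women", "women", "women", "women", "men", "men", "men", "men"]

print(error_rate_by_group(y_true, y_pred, groups))
# e.g. {'women': 0.25, 'men': 0.5} -- a gap that would then need interpreting.
```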

First of all, the legislation of some countries prohibits the processing of ethnic data. Then, choosing which groups to test is a political choice: who will decide how many "skin color" categories to test? Finally, such a system can never be perfectly fair, as there are many approaches to non-discrimination, some of which are mutually incompatible.
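
To see why some of these approaches cannot all be satisfied at once, here is a short sketch with invented numbers: if the true outcome is more frequent in one group than in another, a classifier with identical error rates for both groups (the "equalized odds" criterion) necessarily selects the two groups at different rates, so "demographic parity" fails at the same time.

```python
# A minimal sketch (toy numbers, not from the article) showing that two common
# fairness criteria cannot both hold when base rates differ between groups.

def selection_rate(tpr: float, fpr: float, base_rate: float) -> float:
    """Share of a group receiving a positive decision."""
    return tpr * base_rate + fpr * (1 - base_rate)

# Same error profile for both groups -> "equalized odds" is satisfied.
tpr, fpr = 0.80, 0.10

# Hypothetical base rates: the outcome is more frequent in group A than group B.
base_rate_a, base_rate_b = 0.50, 0.20

rate_a = selection_rate(tpr, fpr, base_rate_a)  # 0.45
rate_b = selection_rate(tpr, fpr, base_rate_b)  # 0.24

print(f"Selection rate, group A: {rate_a:.2f}")
print(f"Selection rate, group B: {rate_b:.2f}")
# Equal error rates, yet unequal selection rates: "demographic parity" fails.
```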

An “acceptable” level of risk of discrimination?

Since it will be impossible to guarantee the absence of discrimination from every angle, choices will have to be made and tolerance thresholds defined. The question then arises: what level of error is acceptable, and what type of error are we talking about? Which amounts to asking: what is the "acceptable" level of risk of discrimination?

In other areas, it is common to define the acceptable level of risk using a quantitative approach. For example, for the safety of a nuclear power station, the risk of an accident is quantified, and the station may open if that risk falls below a certain acceptability threshold. This threshold cannot be zero, because a "zero risk" approach would lead to rejecting nuclear activity altogether, even though it also provides benefits for society. The benefits and the risks of nuclear energy are weighed against each other, and an "acceptable" risk threshold is settled on in practice. Although this threshold can be debated, it is supported by scientific facts: the risks are relatively easy to define and measure.
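
As a purely illustrative sketch of that quantitative logic, with made-up figures: an estimated risk is compared against a pre-agreed acceptability threshold, and the activity is allowed only if the estimate falls below it.

```python
# A minimal sketch (invented figures) of the quantitative approach described
# above: an estimated risk is compared to a pre-agreed acceptability threshold.

estimated_accident_probability_per_year = 1e-6  # hypothetical engineering estimate
acceptability_threshold_per_year = 1e-5         # hypothetical, set by the regulator

acceptable = estimated_accident_probability_per_year <= acceptability_threshold_per_year
print("Risk below the agreed threshold:", acceptable)  # True in this toy case
```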


What about AI systems? If we return to the question of discrimination (racial or gender-based, for example), a technical standard could never tell us what the "good" error rate is, or what level of performance difference between population groups is "acceptable", because the answer depends too heavily on context and on human judgment.

The objective of technical standards in the context of AI should rather be to define a common vocabulary, suitable tests, impact studies and, more generally, good practices throughout the life cycle of AI systems. This would provide a basis for comparing systems and promote discussion among stakeholders. This type of standard is called an "information standard", as opposed to "quality" or "performance" standards.

For example, for a facial recognition system used at border crossings, a technical standard could describe how to measure the system's accuracy and the performance differences between population groups. Such a standard would not define what level of error or discrimination is acceptable, or which groups should be protected, as these choices are a matter for the application of the law and the judgment of the courts.
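
A rough sketch, with invented figures, of what such an "information standard" might ask a provider to report: per-group false match rates and their disparity are measured and documented, while the judgment of whether that disparity is acceptable is deliberately left outside the measurement itself. The `disparity_report` function and the group names are hypothetical.

```python
# A minimal sketch (invented numbers) of the reporting side of an
# "information standard": measure and document per-group disparities,
# without the code itself deciding what is acceptable.

def disparity_report(false_match_rates: dict[str, float]) -> dict[str, float]:
    """Summarise per-group false match rates without judging their acceptability."""
    worst = max(false_match_rates.values())
    best = min(false_match_rates.values())
    return {
        "max_false_match_rate": worst,
        "min_false_match_rate": best,
        "max_min_ratio": worst / best if best > 0 else float("inf"),
    }

# Hypothetical measurements for a border-control face matcher.
rates = {"group_a": 0.001, "group_b": 0.004, "group_c": 0.002}
print(disparity_report(rates))
# Whether a 4x ratio between groups is acceptable is a legal and political
# question that this report does not answer.
```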

Who should decide on the standards?

Standardization bodies are currently working on these standards for AI, with particular attention to ethics. Note in particular the initiatives of the IEEE, the first association to publish a standard "for the consideration of ethical concerns when designing systems", as well as various other standards on transparency or data privacy.

ISO standards are also being drafted by various working groups organized around the common theme of AI, bringing together organizations from different countries, such as AFNOR in France. This work is coordinated at the European level by CEN-CENELEC and its "AI roadmap". In the United States, the National Institute of Standards and Technology (NIST) is responsible for comparing the performance and fairness of facial recognition systems, and recently published guidelines for avoiding bias in learning systems.

However, these deliberations on fundamental questions very often take place behind closed doors. Only accredited organizations can develop these standards, and although they often call for outside participation, access to the work is limited. The current system even requires companies to pay for access to the text of a standard, reducing the transparency of the process. How, then, can we guarantee that the technical specifications actually preserve our rights? Shouldn't we rather ban the development of these opaque standards in favor of a procedure more open to citizen debate?

Certification, a double-edged sword

These standards can also turn out to be useless, or even dangerous if badly used, for example if they serve to affix a certification claiming to protect against any violation of our freedoms. Indeed, the affixing of a CE marking only indicates compliance with the rules in force, as declared by the manufacturer; it does not formally guarantee safety or the absence of discrimination.

This problem is compounded by the "presumption of conformity" created by certification: when a system follows a standard, its safety is less often questioned. It is therefore essential to design a CE marking that does not relieve the suppliers and users of AI systems of their responsibility. The marking would then serve not to guarantee that a system is free from any infringement of fundamental rights, but to guarantee that the company has put in place measures to limit such infringements.

CE marking can be a great regulatory and governance tool when coupled with technical standards that help us speak the same language and compare systems to each other. In contrast, the decision of what constitutes an “acceptable” human rights risk can never be delegated to a technical standard. It is appropriate to reaffirm the responsibility of the providers and users of these systems, who must themselves define, in their context, the most appropriate choice to protect fundamental rights. This decision must be justified and may be challenged, but the final arbitration of the acceptability of the risk must be left to the legislator, the regulator and the judges.
