EU Fundamental Rights Agency warns against biased algorithms

The European Union Agency for Fundamental Rights (FRA) released a report on Thursday (8 December) detailing how biases develop in the algorithms used for predictive policing and content moderation.

The study concludes by calling on MEPs to ensure that these artificial intelligence (AI) applications are tested for biases that could lead to discrimination.

This study comes as the proposal for a regulation on AI (AI Act) takes its course in the legislative process. In particular, the European Parliament is considering the introduction of a fundamental rights impact assessment for AI systems presenting a high risk of harm.

“Well-developed and tested algorithms can bring a lot of improvements. However, without proper controls, developers and users run a high risk of negatively impacting people’s lives,” said FRA Director Michael O’Flaherty.

“There is no miracle solution. But we need a system to assess and mitigate bias before and during the use of algorithms to protect people from discrimination.”

Predictive policing

The agency addressed the risk of discriminatory predictive policing, in which police forces are allocated on the basis of biased crime data, which can lead to over-policing or under-policing of certain areas. Both situations can have serious repercussions on citizens’ fundamental rights.

The problem with AI-powered tools for law enforcement is that they rely on historical crime data that can be skewed by other factors: a lack of trust in the police, for example, may result in criminal activity going unreported.

The agency even questions the validity of this approach, pointing out that “decades of criminological research have shown the limits of such an approach, as police databases do not constitute a complete census of all criminal offenses and do not constitute a representative random sample”.

The main fear with AI is that it learns on its own and can thereby create what are known as feedback loops that reinforce pre-existing biases. For example, less police presence in an area could lead to fewer crime reports, causing the area to receive even less policing in the future, and so on.
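This dynamic can be shown with a toy simulation (the areas, figures and allocation rule below are illustrative assumptions, not drawn from the FRA report): three areas with identical underlying crime rates end up with increasingly unequal policing simply because recorded crime follows patrol presence.

```python
# Toy simulation of the feedback loop described above (illustrative
# assumptions only): every area has the same true crime rate, but recorded
# crime is proportional to patrol presence, and the next round of patrols
# follows the records with a mild "hot spot" emphasis that amplifies gaps.
import random

random.seed(0)

TRUE_CRIME_RATE = 100                               # identical underlying crime everywhere
patrol_share = {"A": 0.40, "B": 0.35, "C": 0.25}    # initial, slightly unequal allocation

for step in range(5):
    # Recorded crime depends on how much police presence an area gets,
    # not only on how much crime actually happens there.
    recorded = {
        area: TRUE_CRIME_RATE * share * random.uniform(0.9, 1.1)
        for area, share in patrol_share.items()
    }
    # Reallocate patrols in proportion to recorded crime, with exponent > 1
    # ("hot spot" emphasis): small initial gaps grow round after round.
    weights = {area: count ** 1.5 for area, count in recorded.items()}
    total = sum(weights.values())
    patrol_share = {area: w / total for area, w in weights.items()}
    print(step, {area: round(s, 2) for area, s in patrol_share.items()})
```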

The AI regulation includes specific requirements for high-risk systems that use feedback loops. However, while these requirements apply to predictive policing, location-based systems are not covered.

Similarly, the European law enforcement data protection directive imposes specific safeguards for automated decision-making relating to persons, but not to geographical areas.

Another issue noted is that the algorithms used for predictive policing are largely proprietary (closed-source), so very little information about how they were developed is public.

At the same time, law enforcement officers do not always have the information and skills to understand why a particular decision was made. Therefore, it is not possible to perform a critical evaluation that would detect errors or biases.

One mitigation measure proposed by the agency is to conduct victimisation surveys, asking a random sample of the population about their experience of crime. Some technical solutions are also proposed, such as preventing machine learning from producing extreme models or using statistical methods to temper overly strong predictions.
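As an illustration of what tempering overly strong predictions can mean in practice, the sketch below applies a generic shrinkage estimator that pulls extreme area-level rates toward the overall average; this is a common statistical device assumed here for illustration, not the specific method proposed in the report.

```python
# Generic shrinkage estimator (an assumption for illustration, not the
# report's own method): pull each area's raw rate toward the overall rate,
# with a stronger pull where the data are thin, so that extreme predictions
# based on little evidence are toned down.

def shrink(counts: dict, population: dict, k: float = 50.0) -> dict:
    """Return per-area rates pulled toward the global rate.

    k controls how strongly sparsely observed areas are pulled to the mean.
    """
    global_rate = sum(counts.values()) / sum(population.values())
    shrunk = {}
    for area, c in counts.items():
        n = population[area]
        raw_rate = c / n
        weight = n / (n + k)          # small areas get low weight
        shrunk[area] = weight * raw_rate + (1 - weight) * global_rate
    return shrunk

# Hypothetical figures: the small "village" looks extreme in the raw data
# (3 incidents among 150 people) but is pulled back toward the global rate.
counts = {"centre": 120, "suburb": 40, "village": 3}
population = {"centre": 10_000, "suburb": 8_000, "village": 150}
print(shrink(counts, population))
```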

A coalition of more than 160 NGOs considers migration a particularly sensitive area for AI-powered predictive analytics tools, as their use could lead to abusive border control measures and discriminatory profiling systems.

Their open letter indirectly criticizes the position of the EU Council, which provides significant exceptions for law enforcement and border control services. At the same time, discussions in the European Parliament have moved towards banning predictive policing.

Content moderation

The report’s second area of study is the risk of ethnic and gender bias in automated tools for detecting offensive speech.

In these cases, there is a significant risk of false positives triggered by terms such as “Muslim” or “Jewish”, because the system does not take into account the context in which these terms are used.
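A hypothetical keyword-based filter shows how such false positives arise; the word list and sentences below are illustrative and not taken from the tools the agency tested.

```python
# Hypothetical keyword-based filter (not one of the tools the FRA tested)
# showing how context-blind matching produces false positives: harmless,
# self-referential sentences are flagged simply because a listed term occurs.

MARKER_TERMS = {"muslim", "jewish"}   # terms the report cites as frequent triggers

def naive_flag(text: str) -> bool:
    """Flag any text containing a marker term, regardless of context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & MARKER_TERMS)

examples = [
    "I am proud to be Muslim.",                     # harmless, still flagged
    "The Jewish community centre opens at nine.",   # harmless, still flagged
    "Have a nice day!",                             # not flagged
]
for sentence in examples:
    print(naive_flag(sentence), "-", sentence)
```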

More advanced methodologies, using word correlations from other data sources, may help mitigate this problem, but only to a certain extent.

Additionally, these methodologies pose some challenges, as they rely heavily on general-purpose AI, which can also be biased. Therefore, instead of completely removing biases, they may reinforce them or introduce new ones.

The researchers also looked at two other European languages with gendered terms, German and Italian. The results show that content detection tools for these languages perform significantly worse than those used for English, while gendered languages present additional gender discrimination issues.

Evaluation of algorithms

The agency suggests that these algorithms should not be used without a preliminary assessment of the biases they may give rise to by disadvantaging people with sensitive characteristics. This assessment should make it possible to determine whether the system is fit for its purpose.

These assessments should be conducted on a case-by-case basis, not only before the AI system goes live, but also during its lifecycle.

However, these assessments of potential discrimination require data on protected characteristics, for which legal guidance is needed on when such data collection is permitted and how it interacts with existing legislation such as the EU directive on the implementation of the principle of equal treatment.
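By way of illustration, such an assessment could include a simple check like the one sketched below, which compares false positive rates across protected groups on a labelled evaluation set; the metric and data are assumptions for illustration, not requirements set out by the FRA.

```python
# One possible check within such an assessment (an illustrative assumption,
# not a metric prescribed by the FRA): compare false positive rates across
# protected groups on a labelled evaluation set, before and during deployment.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples,
    with boolean labels (True = flagged as offensive / high risk)."""
    false_positives = defaultdict(int)
    actual_negatives = defaultdict(int)
    for group, truth, prediction in records:
        if not truth:                      # only actual negatives can yield FPs
            actual_negatives[group] += 1
            if prediction:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in actual_negatives.items() if n}

# Hypothetical evaluation data: a gap between groups is exactly the kind of
# disparity a case-by-case assessment is meant to surface.
data = [
    ("group_a", False, True), ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, True), ("group_b", False, True), ("group_b", False, False),
]
print(false_positive_rate_by_group(data))   # group_a ~ 0.33, group_b ~ 0.67
```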

[Edited by Anne-Sophie Gayet]
