Credit: AI-related excesses in the sights of the European Union

People who apply for credit do not always know it, but their file is often filtered by an artificial intelligence (AI) algorithm that assesses their risk of default. If that risk is low, credit is, of course, granted more easily. “Today, all granting models are based on regressions, which are typically machine-learning algorithms trained on data,” a data scientist at PwC told us. The principle is simple: the machine induces rules from a history of past cases, for an objective defined by a human, then applies them to new cases.
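
As a rough illustration of that principle (not the actual models used by banks, whose variables and data are not public), the sketch below fits a simple scoring model on an invented credit history, then applies it to a new application; every feature name and figure here is an assumption.

```python
# Minimal sketch of a "granting model": learn from a history, apply to new cases.
# All data and features are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical history: income, debt ratio and past payment incidents for
# 1,000 past borrowers, with a label saying whether each one defaulted.
income = rng.normal(30_000, 8_000, 1_000)
debt_ratio = rng.uniform(0.0, 0.6, 1_000)
past_incidents = rng.poisson(0.3, 1_000)
X = np.column_stack([income, debt_ratio, past_incidents])
defaulted = (debt_ratio + 0.2 * past_incidents + rng.normal(0, 0.15, 1_000)) > 0.55

# The machine induces a rule from this history (objective: predict default)...
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, defaulted)

# ...then applies it to a new application to estimate its risk of default.
new_applicant = [[28_000, 0.35, 1]]
print("estimated default risk:", model.predict_proba(new_applicant)[0, 1])
```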

However, individuals whose application has been rejected do not always understand why it was refused or, above all, whether it was for good reasons. Worse, the advisor sometimes does not know either, because the models used can have an opaque side, what are known as “black boxes”: the calculations are so complex that even engineers cannot isolate the decisive parameters with certainty (the most opaque deep-learning models are not used in France, however, according to our interlocutors).

This is a problem, because the algorithms can have biases. For example, a model might frequently deny credit to 30-year-olds and never to 57-year-olds. Why? Simply because the files of 30-year-olds studied by the machine during its training were numerous, so defaults were bound to appear among them, while there were only one or two examples of 57-year-old applicants and each of them repaid their credit in full (a mistake so crude that it is generally anticipated, but other biases can be more pernicious).
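
The sketch below, on entirely invented data, shows how this kind of bias can arise: with only two 57-year-old applicants in the training history, both of whom repaid, the model “learns” that 57-year-olds never default.

```python
# Illustrative sketch (not from the article): how sparse training data for one
# group can produce a spurious rule. Data and figures are entirely invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 500 applicants aged 30 with a plausible 8% default rate...
age_30 = np.full(500, 30)
default_30 = rng.random(500) < 0.08

# ...but only 2 applicants aged 57, both of whom happened to repay in full.
age_57 = np.full(2, 57)
default_57 = np.zeros(2, dtype=bool)

X = np.concatenate([age_30, age_57]).reshape(-1, 1)  # single feature: age
y = np.concatenate([default_30, default_57])

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# The tree concludes that 57-year-olds never default, based on just 2 cases.
print(model.predict_proba([[30], [57]]))  # estimated default risk per age
```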

Obligation to better anticipate risks

Fortunately, the General Data Protection Regulation (GDPR), in force since 2018, already provides that humans must be able to give an interpretation of the causes of a particular result. A customer is entitled to request “the logic behind [the] automated processing” (§63). A draft European regulation currently under discussion, the AI Act, aims to strengthen this interpretation requirement and to better detect risks of discrimination before deployment.

“To use models in the banking context, it will be necessary to obtain a CE label,” explained Virginie Mathivet, manager at TeamWork and doctor in AI, at a conference in September. “This label requires compiling a file that contains analyses of the explainability of the models, but also analyses of the database used to train the machine, maintenance and monitoring plans, and so on.” This does not mean that the explanations given will be absolutely certain (impossible with some models), but that they will be highly probable; engineers speak of interpretability rather than explainability. “Just because the tools tell you there is no bias doesn’t mean there isn’t,” continues Virginie Mathivet. According to her, one way to get as much assurance as possible is to multiply the interpretations and then check that they overlap.
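
As a hedged illustration of that cross-checking idea (this is not the tooling or the CE file described by Virginie Mathivet), the sketch below compares two different interpretations of the same model, the magnitude of its coefficients and a model-agnostic permutation importance, and measures how well their feature rankings overlap.

```python
# Illustrative sketch: multiply the interpretations, then check they agree.
# Model, data and choice of methods are assumptions made for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from scipy.stats import spearmanr

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Interpretation 1: magnitude of the model's own coefficients.
coef_importance = np.abs(model.coef_[0])

# Interpretation 2: model-agnostic permutation importance.
perm = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# If the two rankings disagree strongly, the "explanation" is suspect.
rho, _ = spearmanr(coef_importance, perm.importances_mean)
print(f"rank agreement between the two interpretations: {rho:.2f}")
```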

Partial ban on social credit

The fear of some credit applicants is also that data totally unrelated to their finances will be taken into account in the evaluation of their file. In China, social credit systems already tie individuals’ access to loans to their behavior in daily life or online: a social media post that displeases the Communist Party or a bad attitude on a plane can lower your score. The AI Act is also supposed to prevent this kind of abuse. Article 5 thus prohibits: “The use, by public authorities or on their behalf, of AI systems intended to assess or establish a classification of the reliability of natural persons over a given period according to their social behavior […] in social contexts dissociated from the context in which the data was originally generated or collected.”

In other words, information from one sphere of life cannot be used in a totally different one. However, the text only excludes the use of social credit “by public authorities” and says nothing about its use by private companies. They certainly could not use data under the control of the authorities, but what about other data?

“If we imagine that, tomorrow, a bank judges that our Facebook connections are unsavory and uses this information to assess our solvency, there will be an outcry,” says lawyer Sophie Guicherd, doctor of law and member of the ethics and artificial intelligence chair at the University of Grenoble. “That is not in the spirit of the preparatory work on the regulation, which instead aims at ‘trusted AI’, and it would threaten fundamental freedoms.”

More broadly, the lawyer does not categorically exclude the hypothesis that credit institutions will feed ever broader data, including social information, into their calculations to determine who can obtain credit: “I’m not saying it won’t happen, but the proposed regulation doesn’t go in that direction.”

Jean Barrère, partner at Accuracy and co-author of The New Horizon of Digital Transformation (Dunod, 2022), gave us a very different opinion during an interview last year: “We are moving more or less towards this kind of model. But it will require individuals to lift part of the veil of ignorance that exists today when a banker lends money: the banker does not know everything about an individual’s situation precisely.” This does not necessarily mean that the advisor is more or less accurate without AI, but it raises the question of what level of detail an algorithm-assisted decision requires. That is the difficult question still awaiting European legislators, who are due to meet in the fall, or, if they do not settle it, the courts through possible case law.
