Will banking regulation extend to the use of artificial intelligence?

[EXPERT OPINION] Devices using artificial intelligence are multiplying in the financial sector. With what risks, and under what regulation? Analysis with our expert Guillaume Almeras, founder of the monitoring and advisory site Score Advisor.

In 2020, the Financial Conduct Authority (FCA), the supervisory body for financial institutions in the United Kingdom, created with the Bank of England the Artificial Intelligence Public-Private Forum (AIPPF), which has just published its first report.

This consultation effort stems from the observation that devices using artificial intelligence (AI) are multiplying in the financial sector at a speed and on a scale likely to raise fears that the banks themselves will soon lose control of them. This is what makes the approach particularly interesting: while AI naturally gives rise to numerous important debates in itself, it is the first time, to our knowledge, that the question of its governance has been raised in the financial field.

Although it remains at a very general level at this stage – the initiative has only just begun – the reflection sketches the outlines of a supervisory framework for AI systems capable of ensuring control. It starts with the traceability and attributes of the data used. New risks are then identified, linked in particular to the extreme speed of processing – on the financial markets, algorithmic trading provides a now well-known example.

Then comes the essential issue: the complex relationships between the variables used and their interpretation by automated systems. In other words, are we always able to know exactly what the algorithms determine, according to which principles, and with exposure to which biases? The question becomes particularly complex when AI systems incorporate layers of self-learning and are networked. In short, how do we avoid finding ourselves one day incapable of explaining or understanding what comes out of the machines?

Risk of Autonomous Automated Decisions

Because these systems are, and will increasingly be, used to make decisions that are binding on customers and counterparties. Already today, it happens more and more often that your account manager has no idea why, for example, your online bank card has been blocked. Tomorrow, will no one be able to find out why you were refused credit?

The main risk with AI, as the AIPPF report rightly points out, is ending up with autonomous automated decision-making processes: autonomous not in the sense that the decision is delegated to them, but in the sense that no decision-maker is any longer able to discuss and correct it!

The AIPPF therefore calls for management accountability, and it is reasonable to believe that this view will generally prevail. Don’t financial institutions have an interest, moreover, in developing such controls on their own, without waiting for regulation to oblige them to do so?

Complexity, speed, and the risks of irreversible commitments and over-reaction: some forty years ago, the rise of increasingly sophisticated market operations and instruments already placed financial institutions in a quite comparable situation. Specific supervision of market risks therefore had to be developed, although this has not prevented major market crises and numerous setbacks for banks. Let’s hope that we will be more reactive with algorithms, because their use will affect our daily lives far more.

By Guillaume Almeras, founder of the monitoring and advice site Score Advisor
