The legislation is due to be published in 2022. It introduces new transparency rules for human capital management AI, along with penalties ranging from 2% to 6% of turnover for offenders.
In its draft AI Act, Europe classifies AI used in the service of human resources as “high risk”. As a result, these applications will have to comply with strict transparency rules. Failure to comply may lead to financial penalties ranging from 2% to 6% of turnover. The subject is not to be taken lightly. The AI Act should be published in 2022, and the sanctions may apply 24 months later. Hence the importance, for publishers and user companies alike, of preparing for it now.
“The first actors affected by this legislation will be the publishers”, anticipates Patricia Chemali-Noël, senior expert in personal data protection at the IT services company Umanis. “They will have to declare their high-risk AI and provide the associated documentation. A register will be created by the European Union to centralize it all. They will also have to cooperate with the competent authorities, respond to any request and immediately take corrective measures where appropriate.” Many HR solutions are affected. LinkedIn, which ranks talent through its recruiter-focused offering (Recruiter), is among the best known.
Publishers on the front line
But many other software packages are targeted. This is notably the case for Workday, as well as for applications focused on talent management such as TalentSoft (Cegid group), Clustree (Cornerstone), You-Trust and 365Talents. Within these solutions, machine learning typically generates recommendations for career development, from internal recruitment to training. These recommendations can take into account the employee's career history, the company's needs, and the career paths and choices of employees with similar profiles who stand out for their performance.
Another requirement of the AI Act is to avoid replicating discriminatory patterns present in the historical data used to train models. This is obviously a key issue for HR-oriented AI. “Launched at the initiative of Orange, the GEEIS-AI label offers a framework for reducing sexist algorithmic bias, particularly in matters of remuneration and recruitment”, recalls Maud Ayzac, senior manager at the consulting firm Wavestone. Implemented under the Arborus endowment fund, GEEIS-AI is supported by Carrefour, Danone, EDF, L’Oréal and Sodexo.
“The AI should be transparent and its results explainable. This will allow the end user to understand the reason for an HR decision.”
“Alongside the issue of bias, the AI Act will require providing clear and adequate information to users”, continues Patricia Chemali-Noël at Umanis, who points to the GDPR as a working basis for addressing this issue. Maud Ayzac insists: “The AI must be transparent and its results explainable. This must allow the end user to understand the reason for a decision concerning them.” Explainability is all the more important for HR-oriented AI used in recruitment and career management.
For Maud Ayzac, some HR solution publishers are more inclined than others to adopt such an approach. The consultant notably mentions Neobrain.io, another talent management specialist, as well as Domoscio, an expert in adaptive learning.
A cascade of responsibilities
Building on off-the-shelf HR software, more and more companies are also developing extensions aimed at creating fully personalized recommendation models. What happens in this case? “The company then becomes its own supplier for the models in question and will de facto have to comply with the rules applying to publishers for all these extensions”, explains Patricia Chemali-Noël. “It is up to the company to judge whether the game is worth the candle or whether it would rather offload the responsibility onto a third-party supplier on the market.”
If the machine learning models in question are trained, via an API, on data from a second application in addition to that of the main HR platform, three responsible parties then come into play: the two software publishers and the user company acting as provider of its own models. “Hence the importance of mapping all the building blocks in the loop and of checking the presence in the EU register of those sold on the market”, recommends Patricia Chemali-Noël.
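The mapping exercise recommended above could be sketched, for illustration only, as a simple inventory of the AI components in a pipeline. All names, fields and the register-check criterion below are hypothetical assumptions, not part of the draft regulation; at the time of writing the EU register did not yet exist.

```python
# Hypothetical sketch of an inventory of AI "building blocks" in an HR
# pipeline. Names and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AIComponent:
    name: str             # the platform, application, or extension
    supplier: str         # "internal" if the company built it itself
    high_risk: bool       # HR/talent-management AI is "high risk" in the draft
    sold_on_market: bool  # only marketed components would appear in the register

def components_to_verify(components):
    """Return the high-risk, marketed components whose presence
    in the (future) EU register would need to be checked."""
    return [c for c in components if c.high_risk and c.sold_on_market]

# The three responsible parties from the scenario above:
pipeline = [
    AIComponent("Main HR platform", "Vendor A", high_risk=True, sold_on_market=True),
    AIComponent("Secondary data app (via API)", "Vendor B", high_risk=True, sold_on_market=True),
    AIComponent("Custom recommendation model", "internal", high_risk=True, sold_on_market=False),
]

for component in components_to_verify(pipeline):
    print(component.name)
```

In this sketch, the internally built model falls outside the register check but still leaves the company carrying the supplier's obligations itself, as the quote above explains.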
Similarly, the company will become its own supplier (within the meaning of the AI Act) if it decides to turn an HR application into a group-wide solution resold to all of its subsidiaries. It will again assume the responsibilities of a publisher. Beyond the solution's fit with its needs, it will have to carefully validate regulatory compliance before launch (see the checklist above).
European AI Act: towards a major impact for HR