AI: Experts set up a competition to identify discriminatory systems

Artificial intelligence (AI) systems are everywhere and running all the time, but it can take months or even years to understand how they are biased and discriminatory. The stakes are often high: unfair AI systems can lead to the arrest of innocent people and deprive others of access to housing, employment, or basic services.

On Thursday, October 20, 2022, a group of AI and machine-learning experts launched a new bounty competition that they hope will speed up the process of uncovering the biases and errors embedded in these systems. Inspired by bug bounties in cybersecurity, the competition invites participants to build tools that identify and mitigate algorithmic biases in AI models.

The competition is organized on a volunteer basis by a group of experts working at companies such as Twitter, the software firm Splunk, and Reality Defender, a start-up specializing in deepfake detection. They call themselves the “Bias Buccaneers”.

The group’s first bounty competition for detecting bias in AI systems will focus on images. This is a common problem: in the past, for example, faulty image-detection systems have mistakenly labeled Black people as gorillas.

Participants will need to build a machine-learning model that labels each image according to the apparent skin tone, gender, and age range of the people it shows. This should make it easier to measure and detect biases within datasets. To do so, participants will have access to a dataset of approximately 15,000 synthetically generated images of human faces. Entries will be ranked on how accurately the model labels the images and on the code’s execution time. The contest ends on November 30.
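To make the task concrete, here is a minimal sketch, in PyTorch, of the kind of multi-attribute labeling model an entrant might submit. Everything here is an illustrative assumption: the contest does not prescribe an architecture, and the class counts (a 10-point skin-tone scale, two gender labels, seven age buckets) are placeholders rather than the official taxonomy. Since entries are also scored on execution time, a small backbone such as ResNet-18 is a plausible starting point.

```python
# Minimal sketch of a multi-attribute face labeler (illustrative only).
import torch
import torch.nn as nn
from torchvision import models


class FaceAttributeLabeler(nn.Module):
    """Shared image backbone with one classification head per attribute."""

    def __init__(self, n_skin_tones: int = 10, n_genders: int = 2,
                 n_age_ranges: int = 7):
        super().__init__()
        backbone = models.resnet18(weights=None)  # torchvision >= 0.13 API
        feat_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()               # keep features, drop classifier
        self.backbone = backbone
        # One linear head per attribute the contest scores on.
        self.skin_head = nn.Linear(feat_dim, n_skin_tones)
        self.gender_head = nn.Linear(feat_dim, n_genders)
        self.age_head = nn.Linear(feat_dim, n_age_ranges)

    def forward(self, images: torch.Tensor) -> dict[str, torch.Tensor]:
        feats = self.backbone(images)
        return {
            "skin_tone": self.skin_head(feats),
            "gender": self.gender_head(feats),
            "age_range": self.age_head(feats),
        }


if __name__ == "__main__":
    model = FaceAttributeLabeler()
    batch = torch.randn(4, 3, 224, 224)  # four stand-in RGB face crops
    logits = model(batch)
    labels = {name: out.argmax(dim=1) for name, out in logits.items()}
    print(labels)  # predicted class index per attribute, per image
```

A real entry would train these heads on the contest’s 15,000 synthetic faces and then examine where the labels fail, since surfacing those failure patterns is the point of the exercise.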

Microsoft and the start-up Robust Intelligence have pledged prizes of $6,000 for first place, $4,000 for second, and $2,000 for third. Amazon has offered $5,000 in computing credit to the first group of entrants.

This competition is an example of an emerging industry within artificial intelligence: auditing for algorithmic bias. Twitter launched the first AI bias bounty last year, and Stanford University just wrapped up its first such contest. Meanwhile, the nonprofit Mozilla Foundation is building tools for AI auditors.

These audits are set to become increasingly common. They have been hailed by regulators and AI ethics experts as a good way to bring more accountability to AI systems. In some jurisdictions, these audits will become a legal requirement.

The European Union’s new content-moderation law, the Digital Services Act, includes annual audit requirements for the data and algorithms used by major technology platforms, and the forthcoming EU AI Act could also allow authorities to audit AI systems. The US National Institute of Standards and Technology likewise recommends AI audits as a gold standard. According to Alex Engler, who studies AI governance at the Brookings Institution think tank, the idea is for these audits to work like the inspections we see in other high-risk sectors, such as chemical plants.

However, this approach faces a double problem: first, there are not enough independent auditors to meet the coming demand for algorithmic audits; and second, companies are reluctant to give auditors access to their systems. So argue AI accountability researcher Deborah Raji and her co-authors in a paper published last June.

Contests like this one are designed precisely to circumvent these problems. The AI community hopes they will inspire more engineers, researchers, and experts to acquire the skills and experience needed to carry out these audits. So far, most of the limited scrutiny in the AI world has come either from academics or from tech companies themselves. The aim of competitions such as this one is to create a new sector of experts who specialize in auditing artificial intelligence.

“We’re trying to create a third space for people who are interested in this kind of work, for people who want to get started or who are experts who don’t work for big tech groups,” says Rumman Chowdhury, director of Twitter’s machine-learning ethics, transparency, and accountability team and the leader of the Bias Buccaneers. “That could include hackers and data scientists who want to learn a new skill,” she adds.

The Bias Buccaneers, who created this bounty contest, hope it will be the first of many.

Competitions like this not only give the machine-learning community an incentive to do audits but also advance a shared understanding of “how best to conduct audits and what types of audits we should invest in,” says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.

This effort is “fantastic and absolutely necessary,” insists Abhishek Gupta, founder of the Montreal AI Ethics Institute, who served as a judge in the Stanford University challenge. “The more eyes you have scrutinizing a system, the more likely you are to find its flaws,” he says.

Article by Melissa Heikkilä, translated from English by Kozi Pastakia.
