The White House released its first artificial intelligence bill of rights

The AI Bill of Rights could lay the groundwork for future legislation, but at the moment it is not binding. Big Tech can still move freely in this sector.

The virtual universe is increasingly real, and the AI Bill of Rights is a wake-up call. President Joe Biden introduced a new artificial intelligence bill of rights on October 4. For many it lacks bite: it is dull, non-binding, and very fragile compared with the European Union's far more solid initiatives. Still, it is a first step toward bringing some order to the anarchy of the digital world.

The Office of Science and Technology Policy (OSTP) released the draft after gathering input from startups, tech companies, human rights think tanks, and even the public. The result is five core principles that aim to promote competition in the technology sector and provide federal privacy safeguards.

The five fundamental principles

First, the right of individuals to know how their data is being used. A growing number of Big Tech scandals have shown how personal information is collected without users' knowledge, and it is often not even clear how that huge amount of data will later be used. Next comes the ability to opt out of automated decision-making in favor of a human alternative, and to have direct contact with a person when a problem arises. Finally, the algorithms themselves: the Charter puts on paper the right to be protected from ineffective or unsafe algorithms, and to defuse the risks of algorithms that discriminate on the basis of ethnicity, sex, or religion.

“These technologies are causing real damage to the lives of Americans, damage that goes against our fundamental democratic values, including the fundamental right to privacy, freedom from discrimination and our fundamental dignity,” explained a senior Biden administration official. The great weakness of the AI Bill of Rights is that it will not have the force of law. It is a non-binding white paper: a practical guide for government agencies and an invitation to technology companies to follow the Charter's principles.

Critique of the AI Bill of Rights

Russell Wald, director of policy at the Stanford Institute for Human-Centered AI, explained that the document lacks detail and, quite simply, cannot be enforced. “It is disheartening to see the absence of a coherent federal policy to address the challenges posed by AI, such as coordinated federal oversight, auditing, and review of actions to mitigate the risks and harms caused by deployed or open-source foundation models,” he says. “We would like to see clear bans on the AI deployments that have been the most controversial, which include, for example, the use of facial recognition for mass surveillance.”

Annette Zimmermann, an expert in artificial intelligence, justice, and moral philosophy at the University of Wisconsin-Madison, agrees. She also insists on the need to enshrine in regulatory frameworks principles for recognizing and assigning corporate responsibility. In short, the guidelines are sound, but without binding effect they risk being ineffective.

However, the White House has also announced initiatives to guard against possible harms from AI. The Department of Health and Human Services is working on a plan, due by the end of the year, to reduce algorithmic discrimination in healthcare, after it emerged that certain algorithms gave marginalized groups discriminatory access to care. Next comes the Department of Education, which by 2023 will present a set of recommendations on the use of AI for teaching and learning.

Reactions in the world of technology

OSTP’s AI Bill of Rights is “impressive,” said Marc Rotenberg, who directs the Center for AI and Digital Policy, a nonprofit that tracks AI policy. “It is clearly a starting point. This does not end the discussion over how the United States implements human-centered and trustworthy AI,” he says. Matt Schruers, president of the tech lobby CCIA (whose members include Google, Amazon, and Uber), appreciated the administration's “guidance that government agencies should lead by example in the development of ethical principles of AI.”

Shaundra Watson, director of AI policy at the technology lobby BSA (whose members include Microsoft and IBM), noted that “it will be important to ensure that these principles are applied in a way that increases protections and reliability in practice.” The measure was enthusiastically welcomed by the tech world, an enthusiasm that a binding regulatory framework might well have dampened.

What about the European Union?

Europe, on the other hand, is baring its claws on this subject. This time it is ten steps ahead of the United States. Members of the European Parliament are considering how to amend the AI Act and prohibit certain practices, such as predictive policing, which they say “violates the presumption of innocence as well as human dignity.” It is one way to protect citizens from the dangerous drift of artificial intelligence: barely a week earlier, a new bill was presented that would allow people harmed by AI to file a lawsuit in civil court.

Future prospects

The AI Bill of Rights could lay the groundwork for future legislation, such as passing the Algorithmic Accountability Act or creating an agency to regulate AI, says Sneha Revanur, who leads Encode Justice, an organization focused on youth and AI. “Although limited in its ability to address private sector misdeeds, the AI Bill of Rights can deliver on its promise if meaningfully applied,” she says.
