With AI RMF, NIST tackles the risks of artificial intelligence | Computerworld

The National Institute of Standards and Technology (NIST) is inviting comments on its draft risk management framework for artificial intelligence. The framework may well have far-reaching implications for both the private and public sectors.

Automating activities to improve operational efficiency, redesigning purchase recommendations, approving credit, processing images, predictive monitoring, and much more: the adoption of diverse artificial intelligence (AI) applications by enterprises and public organizations is increasing rapidly. However, like any digital technology, AI is not free of security flaws. Moreover, it raises new questions about privacy, bias, inequality and safety. To better manage the risks associated with AI, the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, has begun developing a voluntary framework called the Artificial Intelligence Risk Management Framework (AI RMF). Its goal: to improve the ability to integrate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems.

The first draft of the framework builds on a concept paper published by NIST in December 2021. The agency hopes the AI RMF will show how the risks of AI-based systems differ from those in other domains, and that it will encourage and equip the many AI stakeholders to address those risks in a targeted manner. According to NIST, the AI RMF can be used to inventory compliance obligations beyond those addressed by the framework itself, including existing regulations, laws, or other mandatory guidelines. Even though AI is subject to the same risks covered by other NIST frameworks, some gaps and risk concerns are unique to AI, and it is these gaps that the AI RMF aims to fill.

A three-class taxonomy of AI characteristics

NIST has identified four groups that may be affected by this framework: AI system stakeholders, AI system operators and evaluators, external stakeholders, and the general public. NIST uses a taxonomy with three classes of characteristics to consider in overall approaches to identifying and managing risks in AI systems: technical characteristics, socio-technical characteristics, and guiding principles.

Technical characteristics refer to factors directly dependent on the designers and developers of AI systems, which are measurable using standard evaluation criteria such as accuracy, reliability and resilience. Socio-technical characteristics refer to how AI systems are used and perceived in individual, collective and societal contexts, covering areas such as explainability, privacy, security and bias management. Finally, in the AI RMF taxonomy, guiding principles refer to broader societal norms and values related to social priorities like fairness, accountability, and transparency.

AI Risk Mapping: The Importance of Context

Like other NIST frameworks, the AI RMF Core contains three elements by which AI risk management activities are organized: functions, categories, and subcategories. The functions are organized to map, measure, manage and govern AI-related risks. Although the AI RMF plans to provide context for specific use cases via profiles, this task, along with a practice guide still in the works, has been deferred to future releases. After releasing the draft framework in mid-March, NIST held a three-day workshop to discuss all aspects of the AI RMF, including further work on mitigating harmful bias in AI technologies.
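The Core's three-level structure (functions containing categories, which in turn contain subcategories) can be sketched as plain data. In the sketch below, only the four function names come from the draft framework; the category and subcategory entries are hypothetical placeholders for illustration, not content taken from the NIST document.

```python
# Sketch of the AI RMF Core's three-level hierarchy:
# function -> categories -> subcategories.
# Function names (map, measure, manage, govern) are from the draft;
# the categories and subcategories below are illustrative placeholders.
AI_RMF_CORE = {
    "map": {
        "context": ["intended use case", "deployment scenario"],
    },
    "measure": {
        "technical characteristics": ["accuracy", "reliability", "resilience"],
    },
    "manage": {
        "mitigation": ["prioritize mapped and measured risks"],
    },
    "govern": {
        "oversight": ["policies", "accountability structures"],
    },
}

def subcategories(function: str) -> list[str]:
    """Flatten all subcategories under a given function."""
    return [sub for cat in AI_RMF_CORE[function].values() for sub in cat]

print(subcategories("measure"))  # → ['accuracy', 'reliability', 'resilience']
```

Representing the Core this way mirrors how NIST's other frameworks (such as the Cybersecurity Framework) are typically consumed programmatically, with profiles layered on top of a shared function/category hierarchy.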

To map AI risks, "it is necessary to determine the context, the use case and the deployment scenario," said Rayid Ghani of Carnegie Mellon University during the workshop. "In an ideal world, all of these things should have happened when the system was built," he added. On this point, Marilyn Zigmund Luke, vice president of America's Health Insurance Plans, told participants that "given the variety of contexts and constructions, the risk will obviously be different for the individual and the company," adding that "to understand all this in terms of risk assessment, you have to start at the beginning and then develop different parameters."

Measurement of AI activities: the need for new techniques

The measurement of AI-related activities is still in its infancy, given the complexity of the ethical and socio-political questions inherent in AI systems. David Danks of the University of California, San Diego, said "there are many aspects of measurement that are essentially delegated to the human," adding: "What does bias mean in this particular context? What are the relevant values? Because risk fundamentally threatens human or corporate values, and values are difficult to specify formally."

On the same topic, Jack Clark, co-founder of the AI safety and research company Anthropic, said the advent of AI has created a need for new indicators and metrics, and that ideally these should be built into the AI technology itself. "One of the challenges of modern AI is that we have to design new measurement techniques alongside the technology itself," Clark said.
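As one concrete illustration of the kind of measurement Clark is describing, a simple group-fairness metric such as the demographic parity difference can be computed directly from a model's decisions. This is a generic sketch of one well-known metric, not something prescribed by the AI RMF, and a nonzero gap flags a disparity worth investigating rather than proving unfairness.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute gap between per-group positive-decision rates.

    decisions: iterable of 0/1 model outcomes (e.g. credit approvals).
    groups: iterable of group labels, aligned with decisions.
    A value near 0 suggests similar treatment across groups; a larger
    value flags a disparity that merits a closer look.
    """
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + d
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Group "a" approved 3 of 4 applicants, group "b" only 1 of 4.
gap = demographic_parity_difference(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(gap)  # → 0.5
```

Embedding a metric like this in the model-serving pipeline itself, rather than in a one-off audit, is one way to read Clark's point about building measurement into the technology.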

AI Risk Management: Improving Training Data

The manage function of the AI RMF being developed by NIST addresses risks that have been mapped and measured, in order to maximize benefits and minimize negative impacts. "However, data quality issues can hamper AI risk management," said Jiahao Chen, chief technology officer at Parity AI. "The data available to us for training models is not necessarily generalizable to the real world, as it may be largely outdated. One has to wonder whether the training data really reflects the state of the world as it is today," he added.
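One lightweight way to ask Chen's question in practice, whether training data still reflects today's world, is a distribution-drift check such as the population stability index (PSI) over a binned feature. This is a generic monitoring sketch, not part of the NIST draft; the bin count and the 0.1/0.2 thresholds are common conventions, not fixed rules.

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a training-time sample
    (expected) and a recent production sample (actual) of one feature.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate shift, > 0.2
    significant drift worth re-validating or retraining the model."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]       # training-time distribution
prod = [0.1 * i + 4.0 for i in range(100)]  # shifted production data
print(psi(train, train) < 0.1, psi(train, prod) > 0.2)  # → True True
```

Run periodically against fresh production data, a check like this turns "is the training data outdated?" from a one-time judgment call into a monitored quantity.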

For Grace Yee, director of ethical innovation at Adobe, "companies can no longer be satisfied with providing the best technologies in the world to create digital experiences. We need to ensure that our technology is designed for inclusion and respects our customers, communities, and Adobe values. Specifically, we are developing new systems and processes to assess whether our AI is creating harmful biases." Vincent Southerland of the New York University School of Law cited the use of predictive policing tools as an example of what can go wrong in managing AI. "These tools are deployed throughout the criminal justice system," he said, from identifying a perpetrator to predicting an offender's possible release date. But only recently has there been recognition "that the data these tools rely on and the way they work contribute to exacerbating racial inequities and prejudices within the criminal justice system itself."

Little or no AI governance

Few companies have so far adopted AI governance policies. Patrick Hall, data analyst at bnh.ai, said that "outside of large consumer finance companies and a few other highly regulated industries, AI is being used without formal governance guidelines, so companies are left to their own devices to settle these contentious governance issues." For Natasha Crampton, chief responsible AI officer at Microsoft, "disruption occurs when a company's approach to governance is too decentralized. It's a situation where teams want to deploy AI models in production, and they just adopt their own processes and structures, and there's little coordination."

For his part, Agus Sudjianto, executive vice president and head of corporate model risk at Wells Fargo, insisted on the need to involve senior executives in the governance of AI risk. "It won't work if the head of AI or the head of management doesn't have the stature and the ear and support of senior executives," he said. Teresa Tung, chief cloud-first technologist at Accenture, said every company should be paying attention to AI: "Nearly half of Global 2000 companies mentioned AI in their earnings release. This is an area that every company should be aware of."

Like other risk management frameworks developed by NIST, such as the Cybersecurity Framework, the final version of the AI RMF could have significant implications for the private and public sectors. NIST is accepting feedback on the current draft of the Artificial Intelligence Risk Management Framework until April 29, 2022.
