NIST’s AI Risk Management Framework Should Address Key Societal-Scale Risks

Brandie Nonnecke, Founding Director, CITRIS Policy Lab | October 26, 2021

Authored by a group of scholars from the University of California, Berkeley: Anthony Barrett, Thomas Krendl Gilbert, Jessica Newman, Brandie Nonnecke, and Ifejesu Ogunleye.

In September 2021, the UN High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the sale and use of artificial intelligence (AI) systems that “pose a serious risk to human rights until adequate safeguards are put in place.” Bachelet said:

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”

The question of how to manage such consequential AI risks is timely, as the National Institute of Standards and Technology (NIST), an agency within the United States Department of Commerce, is in the process of developing an AI Risk Management Framework (AI RMF). As part of developing this framework, NIST is reaching out to the public for input, including by issuing requests for information (RFIs) and convening workshops.

Organizations sometimes focus their risk management efforts primarily on potential risks that may affect them, such as financial risks. Unfortunately, this approach can neglect key types of broader risks to society.

The purpose of the NIST AI RMF is to serve as a voluntary guide that companies and other organizations can follow to improve how they manage risks that could result from their creation or use of AI systems. In doing so, NIST seeks to improve the trustworthiness of AI systems and help to advance the United States’ capabilities in the global AI market.¹

NIST has already signaled that the AI RMF will include societal risks, which is important and valuable, given the significant scale and scope of AI impacts, but it has not yet defined the specific types of societal risks it will address. As representatives of leading AI initiatives across the University of California, Berkeley campus,² we see NIST’s RFI on the AI RMF as an opportunity to take stock and provide recommendations on next steps for the development of responsible AI.

We provided a comment to NIST, which we frame and summarize below.

Summary Recommendations for the NIST AI Risk Management Framework

There are significant challenges associated with developing an effective AI risk management framework. Inconsistent terminology, uncertainty about future developments, and disciplinary divisions limit our capacity to make sense of the risks posed by AI today, and to accurately predict and prepare for future risks.

Many AI systems with controversial risks are already in use, including cars with self-driving capabilities, automated decision systems, and large-scale language models. Much as automobiles, airplanes, and the internet once seemed far-fetched even when their technical capabilities were within reach, these AI applications will become so commonplace and thoroughly integrated into daily behavior that it will be difficult to imagine social life without them.

The AI RMF is an important opportunity to develop risk assessment and mitigation measures that can help prevent the harms people are experiencing from AI systems today and prevent potential future harms from materializing. The AI RMF should integrate a diversity of lenses on AI risks, from the potential worsening of inequalities, to the pressing gaps in meaningful transparency and explainability, to the harms that may emerge from formal (mathematical/statistical) models that have been misspecified with respect to human values and goals.

In our submission to NIST, we highlight the irreducibly sociotechnical contexts in which risks are likely to manifest and be experienced by stakeholders. By “sociotechnical,” we refer to the variety of interfaces between emerging AI systems and the legacy systems (political, cultural, cognitive, economic) with which formal AI models will interact.

We focus on three broad categories of risks: to democracy and security, to human rights and well-being, and of global catastrophes. Although many real-world examples of risks may fit into more than one of those categories, we emphasize these categories for their crucial analytical distinctions and independent importance for ensuring that the future development of AI systems remains safe and commensurate with human priorities.

Prior work has argued for treating each type of risk seriously and urgently on its own. We stress that these risks — however unlikely or difficult to imagine today — are likely to reverberate and exacerbate one another unless we properly address and mitigate them. Put differently, we cannot appropriately prepare for or mitigate any of these risks unless due attention is paid to all of them.

This entails active monitoring and proactive mechanisms to prevent their manifestation and mutual effects. Consequently, the gap we aimed to fill with our submission to NIST is the identification of policy strategies, institutional mechanisms, and technical interventions that address the intersection of these risks, with emphasis on themes that cut across the particular dangers or warnings articulated by AI theorists, computer scientists, policymakers, and stakeholder advocates.

The full list and explanation of our recommendations can be found in our submission here. Below, we summarize several key points:

▹ We recommend that NIST keep focus on and delineate the meaning of societal-scale risks, to include:

    • Risks to democracy and security, such as polarization, extremism, mis- and disinformation, and social manipulation;
    • Risks to human rights and well-being, including equity, environmental, and public health risks; and
    • Global catastrophic risks, including risks to large numbers of people caused by AI accidents, misuse, or unintended impacts in both the near and long terms.

▹ We recommend that NIST treat intended use cases as necessary, but not sufficient, to assess AI risks.

    • We recommend that the AI RMF include clear, usable guidance on identifying and assessing AI risks, yielding risk management strategies that remain robust despite high uncertainty about future potential uses and misuses beyond those the AI designers originally intended or planned.

▹ We recommend that NIST maintain close relationships with researchers in key fields (including AI safety and security; AI capabilities; and the social sciences, such as science and technology studies) in order to follow shifts across these fields and their potential impact on the AI RMF, and that NIST update corresponding components of the AI RMF as needed.

▹ We recommend that NIST consider “assessment of generality” of an AI system (i.e., assessment of the breadth of AI applicability/adaptability) as another important characteristic affecting the trustworthiness of AI, or perhaps as a factor affecting one or more of the AI trustworthiness characteristics NIST has already outlined.

▹ We recommend that NIST consider having the AI RMF include guidance that risk identification processes be performed by a team that is diverse, multidisciplinary, and representative of multiple departments of the organization, and that is inclusive of a correspondingly diverse set of stakeholders from outside the organization.

▹ We recommend that NIST be explicit about how and where the AI RMF will incorporate and coordinate with existing and future AI standards development and risk assessment.

    • We recommend that NIST consider clarifying its planned procedures for making AI RMF updates (e.g., how often, under what conditions, and by what decision criteria), and how it aims to balance flexibility with standard-setting authority.

▹ We recommend that the AI RMF include a comprehensive set of governance mechanisms to help organizations mitigate identified risks.

    • These should include guidance for determining who should be responsible for implementing the AI RMF within each organization, ongoing monitoring and evaluation mechanisms that protect against evolving risks from continually learning AI systems, support for incident reporting, risk communication, complaint and redress mechanisms, independent auditing, and protection for whistleblowers, among other mechanisms. We also recommend that the AI RMF encourage organizations to consider entirely avoiding AI systems that pose unacceptable risks to rights, values, or safety.

Our organizations remain committed to reducing the risks posed by AI technologies, and we look forward to supporting NIST’s efforts to develop an AI risk management framework that enables the advancement of the United States’ AI efforts and improves the trustworthiness of AI systems.³

— Anthony Barrett, Ph.D., PMP, Non-Resident Research Fellow, AI Security Initiative, Center for Long-Term Cybersecurity, UC Berkeley

— Thomas Krendl Gilbert, Ph.D., Research Affiliate, Center for Human-Compatible AI, UC Berkeley

— Jessica Newman, Program Lead, AI Security Initiative, Center for Long-Term Cybersecurity, UC Berkeley

— Brandie Nonnecke, Ph.D., Director, CITRIS Policy Lab, CITRIS and the Banatao Institute, UC Berkeley

— Ifejesu Ogunleye, Graduate Researcher, AI Security Initiative, Center for Long-Term Cybersecurity, UC Berkeley



(2) Contributing organizations are based at UC Berkeley and include the AI Security Initiative of the Center for Long-Term Cybersecurity, the CITRIS Policy Lab, and the Center for Human-Compatible Artificial Intelligence.

(3) For example, we also submitted a response to NIST’s recent draft Special Publication 1270, “A Proposal for Identifying and Managing Bias in Artificial Intelligence”; our submission is listed as comment #51 at