Background

NIST is a leading U.S. standards-setting institution and an agency of the U.S. Department of Commerce. An important part of its work is the development and maintenance of standards, guidelines and best practices for multiple technological and scientific sectors. For example, in 2014 NIST published the first version of its Cybersecurity Framework, which is similar in many respects to ISO’s 27001 Information Security Management standard.

In 2021, the U.S. Congress gave NIST the mission to develop a “voluntary risk management framework for trustworthy artificial intelligence systems”. In particular, Congress requested that this framework include best practices and voluntary standards on how to develop and assess trustworthy AI Systems, mitigate potential risks, establish common definitions of concepts such as explainability, transparency, security and fairness, and remain technology-neutral.

The AI RMF’s development began shortly thereafter, with an initial request for information in July 2021. NIST published a first draft of the AI RMF on March 17, 2022, followed by a second in August of the same year. Throughout this process, NIST received comments and feedback from a broad set of stakeholders from academia, civil society and the private and public sectors, including Microsoft, Google, NASA, IBM and the National Artificial Intelligence Institute. This degree of participation from leading industry players may favour the framework’s adoption by AI Actors. The fact that the AI RMF is intended to be a living document that will evolve over time with the benefit of further input and lessons learned may also contribute to its success.

In parallel with the publication of the AI RMF, NIST presented a Roadmap that sets out further actions it intends to take to improve and expand the framework, including aligning it with other standards (mainly ISO’s IT and AI standards, some of which are still under development).

Structure of the Framework

The AI RMF is divided into two main parts. Part 1 (Foundational Information) provides guidance on how to assess AI risks and measure trustworthiness, while Part 2 (Core and Profiles) details the core of the framework and its four main functions: Govern, Map, Measure and Manage. The framework’s main goal is to assist organizations in managing risks, both to the enterprise and to society at large, that can emerge from the use of AI systems and, ultimately, to cultivate trustworthiness.

As a companion to the AI RMF, NIST also published an online Playbook, which was built to help
organizations use and implement the main framework. The Playbook is a platform which expands on
the four core functions with detailed explanations, suggested actions and recommended resources
for every step of the AI RMF Core.

The AI Risk Management Framework

Part 1 – Foundational Information

As a practical guide, the AI RMF first focuses on the unique risks brought by AI Systems. Organizations have been managing cyber and computer risks for decades, but AI Systems (such as systems that operate autonomously on our roads, create novel art pieces based on databases of human creations, or make hiring, financing or policing recommendations with limited human input) introduce a new range of risks.
Rather than proposing a list of specific risks that may rapidly become outdated or not apply to all AI Systems, NIST identifies AI-specific challenges that organizations should keep in mind when developing their own risk management approach:

• Risk Measurement: AI risks can be difficult to measure precisely, whether quantitatively or qualitatively. This is due in part to the fact that many organizations depend on external service providers to meet their AI needs, which can create alignment and transparency issues. It is compounded by the current absence of generally accepted methods to measure AI risks. Other risk measurement challenges include tracking emergent risks, measuring risk in real-world settings (rather than in controlled settings), and the inscrutability of the underlying algorithms (lack of explainability).
• Risk Tolerance: Tolerance to risk will vary from one organization to another, and from one use case to another. Establishing risk tolerance will require taking into account multiple evolving factors, including the unique characteristics of each organization, its legal and regulatory environment and the broader social context in which it operates.
• Risk Prioritization: Organizations using AI Systems will be faced with the challenge of efficiently triaging risks. NIST recommends an approach where the highest risks are prioritized and where organizations stop using AI Systems that present “unacceptable negative risk levels”. Once again, this evaluation will be contextual; for example, initial risk prioritization may be higher for systems that interact directly with humans (see the illustrative sketch after this list).
• Organizational Integration and Management of Risk: AI risk management should not be considered in isolation from the broader enterprise risk strategy. The AI RMF should be integrated within the organization’s existing risk governance processes so that AI risks are treated alongside other critical risks, creating a more integrated outcome.
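
To make the prioritization idea concrete, the following is a minimal sketch of how an organization might triage AI risks in the spirit of the AI RMF. It is purely illustrative: the scoring scale, the weighting applied to systems that interact directly with humans, and the “unacceptable” threshold are hypothetical assumptions of ours, not values prescribed by NIST.

```python
from dataclasses import dataclass

# Hypothetical values; NIST does not prescribe numeric thresholds or weights.
UNACCEPTABLE_THRESHOLD = 20
HUMAN_INTERACTION_WEIGHT = 1.5

@dataclass
class AIRisk:
    name: str
    severity: int              # 1 (negligible) to 5 (critical)
    likelihood: int            # 1 (rare) to 5 (almost certain)
    interacts_with_humans: bool = False

    def score(self) -> float:
        base = self.severity * self.likelihood
        # Contextual weighting: systems interacting directly with humans
        # receive a higher initial priority.
        return base * HUMAN_INTERACTION_WEIGHT if self.interacts_with_humans else base

def triage(risks: list[AIRisk]) -> None:
    # Address the highest risks first; stop using systems whose risk
    # level is deemed unacceptable.
    for risk in sorted(risks, key=AIRisk.score, reverse=True):
        if risk.score() >= UNACCEPTABLE_THRESHOLD:
            print(f"{risk.name}: unacceptable risk level - stop using the system")
        else:
            print(f"{risk.name}: score {risk.score()} - manage and monitor")

triage([
    AIRisk("Hiring recommendation model", severity=4, likelihood=4, interacts_with_humans=True),
    AIRisk("Internal log anomaly detector", severity=2, likelihood=3),
])
```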

NIST’s focus on risk is in line with current legislative proposals in Canada and Europe: the draft regulations being considered in those jurisdictions both take a risk-based approach under which the development or use of so-called “high-risk” AI Systems would be subject to specific governance and transparency requirements. For example, the Canadian draft Artificial Intelligence and Data Act (“AIDA”) proposes that any person responsible for an AI System should have the obligation to assess whether the system is a “high-impact system” based on criteria that will be set out in regulations.

As such, assessing risk, and especially putting in place workflows to identify and manage high-risk AI Systems, may soon become legally mandated. For organizations wanting to stay ahead of the curve, the AI RMF represents a good starting point. In section 2 of Part 1 of the framework, NIST further details its risk management approach by proposing a general TEVV framework (test, evaluation, verification and validation), inspired by work done by the OECD, that identifies recommended risk management activities for the different stages of the AI lifecycle (the AI RMF includes figures illustrating these activities across the lifecycle).

NIST’s approach shares certain similarities with privacy-by-design approaches, in particular as regards the use of impact assessments. Privacy impact assessments (“PIAs”) have indeed become a staple of privacy compliance, including in Quebec, where Law 25 will, starting in September 2023, require companies to conduct PIAs for new projects involving information systems that can process personal information. If adopted in its current form, AIDA would add the obligation for any organization responsible for an AI System to conduct an assessment to determine whether the system is “high-impact”.

Ultimately, the goal of an effective AI risk management process should be not only to reduce risk for the organization, but also to encourage “trust” by those who adopt the technology. “Trust” and “trustworthy AI” are recurrent motifs of existing principle-based approaches, including the Montréal Declaration for Responsible Development of Artificial Intelligence and iTechLaw’s Responsible AI: A Global Policy Framework (which some authors of this blog contributed to developing). In a manner consistent with such frameworks, the AI RMF treats trust as the cornerstone of AI risk management. From the outset, an AI impact assessment should evaluate whether AI technology is an appropriate or necessary tool for the task at hand, considering the trustworthiness characteristics in conjunction with the relevant risks, impacts, costs and benefits, with contributions from a variety of stakeholders. The AI RMF describes seven characteristics of trustworthy AI, all of which should be considered and treated holistically when deploying an AI System:

1. Valid and Reliable
2. Safe
3. Secure and Resilient
4. Accountable and Transparent
5. Explainable and Interpretable
6. Privacy-Enhanced
7. Fair - with Harmful Bias Managed

NIST places particular emphasis on the “Accountable and Transparent” characteristic, as it underpins all the others. Transparency notably leads to greater accountability by allowing any person who interacts with an AI System to better understand its behaviour and limitations. NIST provides no detailed guidance on how to foster such transparency, but we note that the Institute of Electrical and Electronics Engineers recently made freely available some of its standards on AI Ethics and Governance, including its Standard for Transparency of Autonomous Systems, which provides specific guidance on the topic. Transparency of AI Systems is a key challenge, as their underlying algorithms have a reputation for being “black boxes” that produce outputs that are not easily explainable.

Part 2

Part 2 of the AI RMF is what NIST calls the AI RMF Core, a framework of actions and desired
outcomes in developing responsible and trustworthy AI Systems. It is composed of four functions to
maximize benefit and minimize risk in AI outcomes and activities:

• The Govern function focuses on the implementation of policies and procedures related to
the mapping, measuring, and managing of AI. It has a focus on “people”, emphasizing
workplace diversity, inclusion and a strong risk mitigation culture.
• The Map function highlights the systematic understanding and categorization of the AI’s
performance, capabilities, goals and impacts.
• The Measure function employs quantitative, qualitative, or mixed-method tools, techniques,
and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.
• The Manage function entails allocating risk management resources to mapped and measured risks on a regular basis, as defined by the Govern function.

For each function, NIST provides several categories and subcategories of operational tools for implementation. For example, the first category of the Govern function relates to ensuring that “Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risk are in place, transparent, and implemented effectively”. This category is further divided into seven subcategories that suggest different actions to fulfill the purpose of the function. We will not go over each here, but we recommend that anyone working to develop an AI governance structure within an organization look at the AI RMF, and in particular the interactive Playbook (which provides additional detail), for guidance that can then be adapted to the context.
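
For organizations that prefer to track this work in a structured way, the following is a minimal sketch of how the Core’s hierarchy of functions, categories and subcategories could be captured in a simple internal register. It is purely illustrative: the field names, status values, owners and the abbreviated sample entry are our own assumptions, and the actual categories and subcategories should be taken from the AI RMF and the Playbook themselves.

```python
from dataclasses import dataclass, field
from enum import Enum

class CoreFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"

@dataclass
class Subcategory:
    identifier: str              # e.g. "GOVERN 1.1" as labelled in the AI RMF
    description: str
    owner: str = "unassigned"
    status: Status = Status.NOT_STARTED

@dataclass
class Category:
    function: CoreFunction
    description: str
    subcategories: list[Subcategory] = field(default_factory=list)

# Sample entry; the description is abbreviated and the owner is hypothetical.
govern_1 = Category(
    function=CoreFunction.GOVERN,
    description=(
        "Policies, processes, procedures, and practices related to mapping, "
        "measuring, and managing AI risk are in place, transparent, and "
        "implemented effectively."
    ),
    subcategories=[
        Subcategory(
            "GOVERN 1.1",
            "Legal and regulatory requirements involving AI are understood and managed",
            owner="Legal",
            status=Status.IN_PROGRESS,
        ),
    ],
)

# A register like this can then be reviewed periodically as part of the
# organization's broader enterprise risk governance processes.
for sub in govern_1.subcategories:
    print(f"{sub.identifier} [{sub.status.value}] owner: {sub.owner}")
```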

Takeaways

Although we remain at an early stage in the development of AI standards and laws, with the advent and popularity of generative AI Systems like ChatGPT and Stability AI’s Stable Diffusion (which have brought both the risks and rewards of AI into focus), the need for an efficient and adaptable risk management framework becomes more evident by the day.

The AI RMF is an important step in the right direction, helping to fill the current guidance gap on how best to manage AI risk. While compliance with the AI RMF is of course not mandatory, implementing its recommendations is a proactive way for organizations to get ahead of the curve on regulatory compliance requirements that are already coming into focus. In this sense, implementing the AI RMF will serve as a useful step toward the responsible deployment of AI Systems “by design”.
