MODULE 01 · ~3 hrs

NIST AI Risk Management Framework

Comprehensive coverage of the NIST AI RMF 1.0 (January 2023) — the U.S. government's voluntary framework for managing AI risks across the AI lifecycle. Covers all four core functions: Govern, Map, Measure, and Manage.

1.1 — Overview and Purpose of NIST AI RMF

The NIST AI Risk Management Framework (AI RMF 1.0) was released in January 2023 by the National Institute of Standards and Technology. It provides a structured, flexible approach for organizations to manage risks associated with AI systems throughout their lifecycle.

Unlike prescriptive regulations, the AI RMF is a voluntary framework designed to be adopted by organizations of any size, sector, or jurisdiction. It complements existing risk management practices and can be mapped to other frameworks such as ISO/IEC 42001 and the EU AI Act.

Voluntary, Not Mandatory

The NIST AI RMF is voluntary — it is not a law or regulation. However, it has become a de facto standard referenced by U.S. executive orders and is increasingly expected by regulators and customers. Know the distinction between 'voluntary framework' and 'mandatory regulation' for exam questions.

Key NIST AI Publications
Jan 2023 — AI RMF 1.0 released: the foundational voluntary framework for managing AI risks across the lifecycle.
Jan 2023 — AI RMF Playbook published: a companion resource providing suggested actions for each subcategory of the Core.
Oct 2023 — Executive Order 14110: the U.S. Executive Order on Safe, Secure, and Trustworthy AI references the NIST AI RMF extensively.
Jul 2024 — Generative AI Profile (NIST AI 600-1): a companion profile mapping 12 generative AI risk categories to the four core functions.

The framework defines 'trustworthy AI' across seven characteristics. These characteristics are not independent — they interact and may require trade-offs. For example, increasing explainability may reduce accuracy in certain model architectures. The framework emphasizes that organizations must balance these characteristics based on context.

Seven Characteristics of Trustworthy AI
  1. Valid and Reliable — AI systems perform as intended and consistently produce accurate outputs under expected and challenging conditions.
  2. Safe — AI systems do not endanger human life, health, property, or the environment under defined conditions of use.
  3. Secure and Resilient — AI systems withstand adversarial attacks, unexpected inputs, and environmental changes while maintaining confidentiality and integrity.
  4. Accountable and Transparent — Clear accountability structures exist, and information about the AI system is available to appropriate stakeholders.
  5. Explainable and Interpretable — Stakeholders can understand AI system outputs (explainability) and the mechanisms producing them (interpretability).
  6. Privacy-Enhanced — AI systems protect privacy through design, data minimization, and appropriate handling of personal and sensitive information.
  7. Fair — with Harmful Bias Managed — AI systems are designed to identify, assess, and mitigate harmful biases across the lifecycle.

The AI RMF is structured around two main parts: Part 1 discusses foundational information about AI risks and trustworthiness, while Part 2 introduces the Core — a set of four functions, categories, and subcategories that guide risk management activities. The Core is hierarchical: Functions contain Categories, which contain Subcategories, each with suggested actions documented in the companion Playbook.
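
To make the hierarchy concrete, here is a minimal sketch of the Core as a nested Python structure. The GOVERN 1.1 identifier follows the framework's naming convention, but the layout is illustrative; NIST publishes the Core as prose, not as a data format.

```python
# Minimal sketch of the Core hierarchy: Functions -> Categories ->
# Subcategories, each subcategory backed by suggested actions in the
# companion Playbook.
AI_RMF_CORE = {
    "GOVERN": {   # cross-cutting function, 6 categories
        "GOVERN 1": {  # category: policies, processes, procedures
            "GOVERN 1.1": "Legal and regulatory requirements involving AI "
                          "are understood, managed, and documented.",
            # ... remaining subcategories elided
        },
        # ... GOVERN 2 through GOVERN 6 elided
    },
    "MAP": {},      # 5 categories
    "MEASURE": {},  # 4 categories
    "MANAGE": {},   # 4 categories
}

def subcategories(function: str):
    """Flatten one function into (category, subcategory, text) rows."""
    for category, subs in AI_RMF_CORE[function].items():
        for sub_id, text in subs.items():
            yield category, sub_id, text

for row in subcategories("GOVERN"):
    print(row)
```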

Key Points
Released January 2023 — voluntary, not mandatory
Seven characteristics of trustworthy AI (know all seven)
Two parts: Foundational concepts + Core functions
Designed to complement (not replace) existing frameworks
Applicable across sectors, sizes, and jurisdictions
Referenced by Executive Order 14110 (Oct 2023)

1.2 — The Four Core Functions

The NIST AI RMF Core consists of four functions that provide a structure for managing AI risks. These functions are not sequential — they operate concurrently and iteratively throughout the AI system lifecycle. The GOVERN function is cross-cutting, meaning it underpins and informs all other functions.

AI RMF Core Functions Cycle
GOVERN — cross-cutting function: policies, processes, accountability structures
MAP — context and framing: understand the AI system, stakeholders, and risks
MEASURE — analyze, assess, benchmark, and monitor AI risks and impacts
MANAGE — allocate resources, respond to, recover from, and communicate about risks

GOVERN — Organizational Policies and Culture

GOVERN establishes and maintains the organizational policies, processes, procedures, and structures for AI risk management. This is the cross-cutting function that informs and is informed by the other three. Key activities include defining roles and responsibilities, establishing risk tolerances, creating accountability structures, and fostering an organizational culture of responsible AI. GOVERN has 6 categories (GV-1 through GV-6) covering policies, accountability, workforce diversity, organizational culture, and stakeholder engagement.

MAP — Context and Risk Framing

MAP creates context for framing risks related to an AI system. This function helps organizations understand the AI system's purpose, its users, the operational environment, and the potential impacts. Mapping involves identifying and classifying the AI system, understanding its intended and unintended uses, documenting data provenance, and assessing the legal and regulatory landscape. MAP has 5 categories covering system context, requirements, benefits/costs, and risk identification.

MEASURE — Analysis and Monitoring

MEASURE employs quantitative, qualitative, or mixed methods to analyze, assess, benchmark, and monitor AI risks and their impacts. This includes selecting and applying appropriate metrics, conducting testing and evaluation (including adversarial testing and red-teaming), tracking identified risks over time, and comparing system performance against established benchmarks. MEASURE has 4 categories covering metrics selection, evaluation, and tracking.

MANAGE — Response and Communication

MANAGE allocates risk resources and implements plans to respond to, recover from, and communicate about AI risks. This function includes prioritizing risks based on impact and likelihood, implementing mitigation strategies, establishing incident response procedures, and continuously monitoring the effectiveness of risk treatments. MANAGE has 4 categories covering risk prioritization, treatment, and communication.
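
Neither the framework nor the Playbook prescribes a scoring formula for this prioritization. A simple impact-times-likelihood score, sketched below with an assumed 1-5 scale and made-up risks, is one common way to operationalize it:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    impact: int      # 1 (negligible) .. 5 (severe) -- assumed scale
    likelihood: int  # 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        # Classic impact x likelihood; swap in your organization's
        # own scoring method if GOVERN has defined one.
        return self.impact * self.likelihood

risks = [
    AIRisk("Biased outputs in loan decisions", impact=5, likelihood=3),
    AIRisk("Accuracy degradation from model drift", impact=3, likelihood=4),
    AIRisk("Data leakage via prompt injection", impact=4, likelihood=2),
]

# Highest-scoring risks get mitigation resources first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}")
```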

Core Functions at a Glance
GOVERN — Purpose: establish culture, policies, and accountability for AI risk management. Key activities: define roles, risk tolerances, organizational culture, workforce diversity.
MAP — Purpose: contextualize AI system risks by understanding purpose, stakeholders, and environment. Key activities: classify AI systems, document data provenance, identify intended/unintended uses.
MEASURE — Purpose: quantify, assess, and benchmark AI risks using appropriate metrics. Key activities: select metrics, conduct red-teaming and adversarial testing, track risks over time.
MANAGE — Purpose: prioritize and treat risks; implement incident response and communication. Key activities: implement mitigation, incident response playbooks, risk communication, monitoring.
Function Relationships

GOVERN is the only cross-cutting function — it is NOT part of the MAP→MEASURE→MANAGE sequence. Exam questions often test whether you understand that GOVERN underpins all other functions. Also remember: the functions are concurrent and iterative, not sequential or one-time activities.

Key Points
GOVERN is cross-cutting — informs all other functions
MAP establishes context before measurement
MEASURE uses quantitative and qualitative methods
MANAGE implements responses and monitors effectiveness
Functions are not sequential — they operate concurrently
GOVERN has 6 categories; MAP has 5; MEASURE has 4; MANAGE has 4

1.3 — AI RMF Profiles and Use Cases

NIST AI RMF Profiles are implementations of the framework tailored to specific use cases, sectors, or applications. Profiles help organizations prioritize which parts of the Core to implement based on their specific context. A Profile is essentially a selection and prioritization of the Core's subcategories relevant to a particular scenario.

A Generative AI Profile (NIST AI 600-1) was released in July 2024, addressing risks unique to or exacerbated by foundation models and generative AI systems, including confabulation (the profile's term for hallucination), data poisoning, prompt injection, CSAM generation, environmental costs, and intellectual property concerns. The profile maps 12 risk categories to the four core functions.

Generative AI Profile — 12 Risk Categories
  1. CBRN Information — risk of AI generating chemical, biological, radiological, or nuclear weapons information.
  2. Confabulation — AI generating false information presented as fact (hallucinations).
  3. Data Privacy — exposure or misuse of personal/sensitive data during training or inference.
  4. Environmental Impact — energy consumption, carbon footprint, and resource usage of large models.
  5. Harmful Bias / Homogenization — amplification of societal biases; reduction of information diversity.
  6. Human-AI Configuration — risks from improper human-AI interaction design (over-reliance, automation bias).
  7. Information Integrity — risks to the broader information ecosystem (deepfakes, misinformation at scale).
  8. Information Security — prompt injection, data poisoning, model extraction, adversarial attacks.
  9. Intellectual Property — training on copyrighted data; generating infringing content.
  10. Obscene / Degrading Content — generation of CSAM, non-consensual intimate imagery, or degrading material.
  11. Toxic Content — generation of hate speech, violent content, or discriminatory language.
  12. Value Chain / Component Integration — risks from third-party models, APIs, datasets, and supply chain dependencies.
Know All 12 Categories

Exam questions frequently ask you to identify or categorize risks according to the Generative AI Profile. Memorize all 12 categories. A common trick: 'hallucination' is officially called 'confabulation' in the NIST GenAI Profile. Also note that 'information security' covers prompt injection — a frequently tested topic.

Organizations can create Current Profiles (documenting existing practices) and Target Profiles (desired future state) to identify gaps and plan improvements. The gap analysis between Current and Target Profiles drives the risk management roadmap.
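
A minimal sketch of that gap analysis, assuming each subcategory is scored on a simple 0-4 maturity scale (the scale, subcategory IDs, and scores here are hypothetical; the AI RMF does not prescribe maturity levels):

```python
# Maturity per subcategory: 0 = absent .. 4 = optimized (assumed scale).
current_profile = {"GOVERN 1.1": 3, "MAP 1.1": 1, "MEASURE 2.5": 0}
target_profile  = {"GOVERN 1.1": 4, "MAP 1.1": 3, "MEASURE 2.5": 2}

gaps = {
    sub: target_profile[sub] - current_profile.get(sub, 0)
    for sub in target_profile
    if target_profile[sub] > current_profile.get(sub, 0)
}

# Largest maturity gaps first: these become roadmap priorities.
for sub, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{sub}: close a {gap}-level gap")
```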

Creating and Using AI RMF Profiles
1. Build a Current Profile

Document which subcategories of the Core your organization currently addresses, and to what extent. Assess the maturity of each practice against the Playbook's suggested actions.

2. Conduct Gap Analysis

Compare current practices against the full set of Core subcategories relevant to your use case. Identify areas where practices are absent, informal, or insufficient. Prioritize gaps by risk impact.

3. Define the Target Profile

Select and prioritize the Core subcategories you want to achieve. Set specific, measurable targets for each subcategory. Align targets with organizational risk tolerances, regulatory requirements, and stakeholder expectations.

4. Create a Roadmap

Develop an actionable plan to close gaps between Current and Target Profiles. Assign ownership, allocate resources, set timelines, and establish milestones. Review and update the roadmap regularly.

Key Points
Profiles customize the framework for specific contexts
Generative AI Profile (NIST AI 600-1, July 2024) covers 12 unique risk categories
Current vs Target Profiles identify improvement gaps
Profiles map to all four Core functions
NIST uses 'confabulation' not 'hallucination'
Gap analysis drives the risk management roadmap

1.4 — Implementing AI RMF in Practice

Implementing the NIST AI RMF requires a phased, practical approach that begins with organizational governance and progressively integrates risk management into every stage of the AI lifecycle. Success depends on executive sponsorship, cross-functional collaboration, and continuous iteration.

Implementation Steps
1. Establish Governance Structure

Assign an AI risk management lead or committee. Form cross-functional teams spanning technical, legal, ethics, and business stakeholders. Define risk tolerance thresholds and decision-making authority. Secure executive buy-in and budget.

2. Build AI System Inventory

Document every AI system in production and development: purpose, data sources, model type, deployment method, affected stakeholders, and regulatory obligations. Classify systems by risk level. This inventory is the foundation for MAP activities.
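
One way to keep that inventory is a structured record per system, as in the hypothetical schema below; the AI RMF does not mandate specific fields, so adapt them to your regulatory context.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory row; fields mirror the items listed above."""
    name: str
    purpose: str
    model_type: str                   # e.g. "gradient-boosted trees", "fine-tuned LLM"
    data_sources: list[str]
    deployment: str                   # e.g. "batch", "real-time API"
    stakeholders: list[str]
    regulations: list[str] = field(default_factory=list)
    risk_level: str = "unclassified"  # e.g. low / medium / high

inventory = [
    AISystemRecord(
        name="loan-approval-v2",               # hypothetical system
        purpose="consumer credit decisioning",
        model_type="gradient-boosted trees",
        data_sources=["credit_bureau", "application_form"],
        deployment="real-time API",
        stakeholders=["applicants", "underwriting team"],
        regulations=["ECOA", "FCRA"],
        risk_level="high",
    ),
]
```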

3. Define Baseline Metrics

Before deployment, establish baseline performance metrics including accuracy, fairness across demographic groups, robustness to adversarial inputs, explainability scores, and resource consumption. Define acceptable thresholds for each metric.
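
Encoded as data, such thresholds can gate deployment automatically. The metric names and values below are illustrative; real thresholds must come from the risk tolerances set under GOVERN.

```python
# Illustrative thresholds -- not NIST-specified values.
BASELINES = {
    "accuracy":               {"min": 0.90},
    "demographic_parity_gap": {"max": 0.05},  # fairness across groups
    "adversarial_accuracy":   {"min": 0.75},  # robustness under attack
}

def baseline_violations(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that violate their baseline thresholds."""
    bad = []
    for name, bounds in BASELINES.items():
        value = metrics[name]
        if "min" in bounds and value < bounds["min"]:
            bad.append(name)
        if "max" in bounds and value > bounds["max"]:
            bad.append(name)
    return bad

print(baseline_violations(
    {"accuracy": 0.93, "demographic_parity_gap": 0.08, "adversarial_accuracy": 0.80}
))  # -> ['demographic_parity_gap']
```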

4. Implement Continuous Monitoring

Deploy monitoring systems to track model performance, data drift, fairness metrics, and security events post-deployment. Set up automated alerts for threshold violations. Conduct periodic re-evaluation against baselines.
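
Drift checks like this can be automated. The sketch below uses the Population Stability Index (PSI), a common industry heuristic rather than anything NIST mandates, to compare live traffic against the training distribution of one feature:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10) -> float:
    """PSI between a training-time sample and live traffic for one feature.
    Rule of thumb (industry convention, not a NIST threshold):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live  = rng.normal(0.8, 1.2, 10_000)  # shifted live traffic
psi = population_stability_index(train, live)
print(f"PSI = {psi:.2f}")
if psi > 0.25:
    print("ALERT: significant drift -- trigger re-evaluation against baselines")
```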

5. Create Incident Response Playbooks

Develop AI-specific incident response procedures covering model failures, biased outputs, security breaches, and compliance violations. Define escalation paths, rollback procedures, and communication protocols. Test playbooks regularly.
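
Capturing each playbook entry as structured data keeps it versionable and testable. Every field name, threshold, role, and SLA in this sketch is hypothetical:

```python
# Hypothetical playbook entry for one AI-specific incident type.
MODEL_DRIFT_PLAYBOOK = {
    "incident_type": "model_drift",
    "detection": "PSI > 0.25 on any monitored feature, or accuracy below baseline",
    "severity_rules": {
        "high": "drift affects a fairness metric for a protected group",
        "medium": "overall accuracy degrades but fairness metrics hold",
    },
    "escalation_path": ["on-call ML engineer", "AI risk lead", "model owner"],
    "rollback": "redeploy last validated model version; freeze retraining pipeline",
    "communication": "notify affected business owners within 24 hours",  # assumed SLA
    "post_incident": "root-cause review; update MEASURE baselines if needed",
}
```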

6. Conduct Regular Audits

Schedule third-party audits for independent validation of risk management practices. Internal audits should verify compliance with organizational policies and the AI RMF. Document findings and track remediation.

Implementation Flow
Governance Setup (roles, policies, risk tolerances) → AI Inventory (catalog all AI systems) → Baseline Metrics (pre-deployment benchmarks) → Monitoring (continuous post-deployment tracking) → Incident Response (playbooks for AI-specific failures) → Audit (internal and third-party validation)
Implementation Priority

When asked about implementation order, always start with GOVERN (governance setup). You cannot effectively MAP, MEASURE, or MANAGE risks without organizational structures, policies, and accountability in place first. Also remember: third-party audits provide independent validation — they do not replace internal monitoring.

For MANAGE activities, organizations should create incident response playbooks specifically for AI failures — these differ from traditional IT incident response. AI incidents may involve gradual degradation (model drift), biased outputs affecting specific demographic groups, or adversarial manipulation. Rollback procedures must account for model versioning, data pipeline states, and downstream system dependencies.
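
The sketch below illustrates that coupling: the model and its paired feature-pipeline state roll back together, and downstream consumers are notified explicitly. The registry and deployment interfaces are hypothetical stand-ins, not a specific MLOps product's API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins; real MLOps platforms expose their own APIs.
@dataclass
class ModelVersion:
    version: str
    pipeline_snapshot: dict  # feature schema / transform state captured at validation

class Deployment:
    def swap_model(self, model: ModelVersion) -> None:
        print(f"serving model {model.version}")
    def restore_feature_pipeline(self, snapshot: dict) -> None:
        print(f"restored pipeline schema {snapshot['schema_version']}")
    def notify_downstream(self, message: str) -> None:
        print(f"downstream notice: {message}")

def rollback(deployment: Deployment, target: ModelVersion) -> None:
    """Roll model and pipeline state back together: serving an old model
    on new-schema features is itself a failure mode."""
    deployment.swap_model(target)
    deployment.restore_feature_pipeline(target.pipeline_snapshot)
    # Downstream systems may cache predictions or thresholds tuned to the
    # newer model, so notify them rather than relying on drift alarms.
    deployment.notify_downstream(f"rolled back to {target.version}")

rollback(Deployment(), ModelVersion("v1.4.2", {"schema_version": 7}))
```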

Key Points
Start with governance: roles, teams, risk tolerances
Maintain a comprehensive AI system inventory
Baseline metrics before deployment, continuous monitoring after
Incident response playbooks for AI-specific failures
Third-party audits for independent validation
Implementation is iterative, not one-time
// Practice Questions
Q1: What are the four core functions of the NIST AI RMF, and which one is considered cross-cutting?
Answer:

Govern, Map, Measure, and Manage. Govern is the cross-cutting function that informs and is informed by the other three. It establishes the organizational policies, roles, accountability structures, and culture necessary for effective AI risk management.

Q2: Name all seven characteristics of trustworthy AI as defined by the NIST AI RMF.
Answer:

1) Valid and Reliable, 2) Safe, 3) Secure and Resilient, 4) Accountable and Transparent, 5) Explainable and Interpretable, 6) Privacy-Enhanced, and 7) Fair — with Harmful Bias Managed. These characteristics are interrelated and may involve trade-offs.

Q3: What is the purpose of AI RMF Profiles, and how do Current and Target Profiles work together?
Answer:

Profiles are tailored implementations of the framework for specific use cases, sectors, or applications. They help organizations prioritize which parts of the Core to implement. Current Profiles document existing practices; Target Profiles define the desired future state. The gap analysis between them drives the risk management improvement roadmap.

Q4: How does the Generative AI Profile (NIST AI 600-1) differ from the base AI RMF? Name at least six of its risk categories.
Answer:

The Generative AI Profile (July 2024) addresses 12 unique risk categories specific to foundation models and generative AI: CBRN information, confabulation, data privacy, environmental impact, harmful bias/homogenization, human-AI configuration, information integrity, information security, intellectual property, obscene/degrading content, toxic content, and value chain/component integration.

Q5: What is the difference between 'confabulation' and 'hallucination' in the context of the NIST GenAI Profile?
Answer:

They refer to the same phenomenon — AI generating false information presented as fact. However, NIST officially uses the term 'confabulation' in the Generative AI Profile rather than the colloquial term 'hallucination.' Exam questions may test this terminology distinction.

Q6: Explain the relationship between the NIST AI RMF and U.S. Executive Order 14110. Is the framework mandatory?
Answer:

Executive Order 14110 (October 2023) on Safe, Secure, and Trustworthy AI references the NIST AI RMF extensively and directs federal agencies to use it. However, the AI RMF itself remains voluntary for private sector organizations. The EO effectively makes it a de facto standard for government AI systems while encouraging broader adoption.

Q7: What should be the first step when implementing the NIST AI RMF, and why?
Answer:

The first step is establishing governance structures (the GOVERN function): assigning an AI risk management lead, forming cross-functional teams, defining risk tolerances, and securing executive buy-in. This comes first because you cannot effectively MAP, MEASURE, or MANAGE risks without organizational policies, accountability, and resources in place.

Q8: How does the MEASURE function differ from traditional software testing? What specific AI testing methods does it include?
Answer:

MEASURE goes beyond traditional functional testing to include AI-specific methods: adversarial testing (red-teaming), fairness testing across demographic groups, robustness testing against distribution shift, explainability scoring, and continuous post-deployment monitoring for model drift. It uses quantitative, qualitative, or mixed methods and tracks risks over time rather than being a one-time pass/fail check.
