MODULE 02 · ~3 hrs

ISO/IEC 42001 — AI Management System

Deep dive into ISO/IEC 42001:2023 — the world's first international standard for AI Management Systems (AIMS). Covers the Plan-Do-Check-Act cycle, Annex A controls, certification requirements, and integration with ISO 27001.

2.1 — What is ISO/IEC 42001?

ISO/IEC 42001:2023, published in December 2023, is the world's first international management system standard for Artificial Intelligence. It specifies requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) within an organization.

AI Management System (AIMS)

An AIMS is a set of interrelated or interacting elements of an organization that establishes policies, objectives, and processes to achieve those objectives in relation to the responsible development, provision, or use of AI systems. It provides the organizational structure for governing AI throughout its lifecycle.

The standard follows the Harmonized Structure (HS) common to all ISO management system standards (like ISO 27001, ISO 9001), making it straightforward to integrate into existing management systems. It uses the Plan-Do-Check-Act (PDCA) cycle as the foundation for continual improvement.

Plan-Do-Check-Act (PDCA) Cycle
PLAN
Establish objectives, policies, and processes. Conduct risk assessments. Define the AIMS scope.
DO
Implement the planned processes. Apply Annex A controls. Manage AI lifecycle activities.
CHECK
Monitor, measure, audit, and review performance. Evaluate AIMS effectiveness.
ACT
Address nonconformities. Take corrective actions. Drive continual improvement.
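The four phases above can be sketched as a simple loop. This is a minimal illustrative sketch only; the activity names are assumptions drawn from the phase summaries, not text from the standard itself.

```python
# A minimal sketch of one PDCA iteration for an AIMS. The phase activities
# are illustrative assumptions, not wording from ISO/IEC 42001.
PDCA_PHASES = {
    "Plan":  ["define AIMS scope", "set AI objectives", "assess AI risks"],
    "Do":    ["apply Annex A controls", "manage AI lifecycle activities"],
    "Check": ["run internal audits", "hold management review"],
    "Act":   ["raise corrective actions", "update objectives"],
}

def run_pdca_cycle(log: list[str]) -> list[str]:
    """Append each phase's activities to an audit log, in PDCA order."""
    for phase in ("Plan", "Do", "Check", "Act"):
        for activity in PDCA_PHASES[phase]:
            log.append(f"{phase}: {activity}")
    return log

cycle_log = run_pdca_cycle([])
print(cycle_log[0])    # Plan: define AIMS scope
print(len(cycle_log))  # 9
```

The point of the loop structure is that the cycle repeats: each Act phase feeds updated objectives back into the next Plan phase.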

ISO 42001 is certifiable — organizations can undergo third-party audits to achieve certification, demonstrating to stakeholders, regulators, and customers that they manage AI responsibly. Certification is conducted by accredited certification bodies and is valid for three years with annual surveillance audits.

The standard applies to any organization that provides or uses AI — regardless of size, type, or sector. It covers the entire AI lifecycle from conception through decommissioning.

ISO AI Standards Landscape
Date | Standard | Scope
Dec 2023 | ISO/IEC 42001 | First international AI management system standard (certifiable).
2023 | ISO/IEC 23894 | Guidance on AI risk management; complements 42001 with detailed risk processes.
2022 | ISO/IEC 38507 | Governance implications of AI; guidance for governing bodies.
2023–2024 | ISO/IEC 42005, 42006 | AI system impact assessment (42005) and requirements for certification bodies auditing AIMS (42006).
Ongoing | ISO/IEC 5338, 5339 | AI system lifecycle processes (5338) and guidance for AI applications (5339).
Key Points
First international AI management system standard (December 2023)
Certifiable — third-party audits available
Follows Harmonized Structure (compatible with ISO 27001, 9001)
Uses Plan-Do-Check-Act cycle
Covers full AI lifecycle
Part of a broader family of ISO AI standards

2.2 — Core Clauses (4–10)

ISO 42001 follows the Harmonized Structure (HS), meaning Clauses 4 through 10 mirror the same structure found in ISO 27001, ISO 9001, and other management system standards. This deliberate alignment makes it possible to integrate AIMS with existing management systems without duplicating effort.

ISO 42001 Core Clauses Overview
Clause | Title | Key Requirement
4 | Context of the Organization | Determine internal/external issues, stakeholder needs, AIMS scope, and AI system lifecycle boundaries.
5 | Leadership | Top management commitment, AI policy establishment, and assignment of roles/responsibilities.
6 | Planning | Address risks and opportunities, set AI objectives, conduct AI risk assessment including societal impacts.
7 | Support | Provide resources, ensure competence (education/training), manage awareness, communication, and documentation.
8 | Operation | Implement AI risk management, conduct AI impact assessments, manage lifecycle, apply Annex A controls.
9 | Performance Evaluation | Monitor, measure, analyze, and evaluate AIMS. Conduct internal audits and management reviews.
10 | Improvement | Address nonconformities, take corrective actions, drive continual improvement of the AIMS.

Clause-by-Clause Detail

Clause 4 (Context) requires organizations to understand the internal and external issues relevant to their AI systems, identify stakeholders and their requirements, and define the scope of the AIMS. Organizations must determine which AI systems fall within scope and document the context in which they operate, including applicable regulations and industry standards.

Clause 5 (Leadership) requires top management to demonstrate commitment to the AIMS by establishing an AI policy, assigning roles and responsibilities, and ensuring the AIMS achieves its intended outcomes. The AI policy must be appropriate to the organization's purpose, include commitment to compliance and continual improvement, and be communicated to all relevant parties.

Clause 6 (Planning) requires organizations to address risks and opportunities, set measurable AI objectives, and plan how to achieve them. Critically, the AI risk assessment must consider impacts on individuals, groups, and society — not just organizational/business risks. This is a key differentiator from traditional risk assessments.

Clause 7 (Support) ensures the organization provides necessary resources, including competent personnel. Staff working on AI systems must have appropriate competence through education, training, or experience. Documentation requirements are comprehensive and must be controlled.

Clause 8 (Operation) is the implementation clause where planned processes are executed. This is where Annex A controls are applied, AI impact assessments are conducted, and AI system lifecycle activities (design, development, testing, deployment, operation, retirement) are managed. Third-party AI system relationships are also governed here.

Clause 9 (Performance Evaluation) requires monitoring both AI system performance and AIMS effectiveness. Internal audits must be planned and conducted at regular intervals. Management reviews must evaluate the continuing suitability, adequacy, and effectiveness of the AIMS.

Clause 10 (Improvement) closes the PDCA loop by requiring organizations to address nonconformities with corrective actions and continually improve the AIMS's suitability, adequacy, and effectiveness.

Harmonized Structure Advantage

Because ISO 42001 uses the same Harmonized Structure as ISO 27001 and ISO 9001, exam questions may ask about integration benefits. Key point: Clauses 4-10 have the same numbering and general purpose across all HS-based standards. An organization already certified to ISO 27001 can leverage existing processes for leadership commitment (Clause 5), internal audits (Clause 9), and corrective actions (Clause 10).

Key Points
Clauses 4-10 follow the Harmonized Structure
Clause 5 requires top management commitment and AI policy
Clause 6 mandates AI risk assessment including societal impacts
Clause 8 is operational — where Annex A controls are applied
Clause 9 requires both AI system and AIMS evaluation
Integration with ISO 27001/9001 leverages existing processes

2.3 — Annex A Controls

Annex A of ISO 42001 contains 38 normative controls organized across multiple domains. These controls are not optional in the traditional sense — organizations must consider each control and either implement it or provide documented justification for its exclusion in a Statement of Applicability (SoA). This mirrors the approach used in ISO 27001's Annex A.

Annex A Control Domains
Domain | Focus Area | Example Controls
A.2 — AI Policies | Organizational AI policy framework | AI policy aligned with organizational objectives; communication of policies to stakeholders
A.3 — Internal Organization | Roles, responsibilities, and reporting | Assignment of AI responsibilities; separation of duties in AI development/deployment
A.4 — Resources for AI Systems | Data, tools, infrastructure, and compute | Data management processes; infrastructure provisioning; toolchain management
A.5 — Assessing Impacts of AI Systems | AI impact assessment processes | Pre-deployment impact assessment; ongoing monitoring of AI system impacts on individuals and society
A.6 — AI System Lifecycle | Design, development, testing, deployment, operation, retirement | Requirements specification; design documentation; verification and validation; change management
A.7 — Data for AI Systems | Data quality, provenance, and governance | Data acquisition; data quality management; data labeling; bias in data; privacy protections
A.8 — Information for Interested Parties | Transparency and communication | Disclosure of AI system use; explanation of decisions; communication with affected parties
A.9 — Use of AI Systems | Responsible use policies and practices | Responsible use policies; human oversight requirements; monitoring of use
A.10 — Third-party / Supply Chain | External AI components and providers | Due diligence on third-party AI; contractual requirements; supply chain risk management

AI Impact Assessment controls (A.5) are particularly important. Organizations must assess the potential impact of AI systems on individuals, groups, and societies before deployment. This assessment must consider impacts on human rights, fairness, transparency, accountability, safety, and the environment. Impact assessments must be reviewed and updated throughout the AI system lifecycle.

Data Management controls (A.7) address data quality, data provenance, data labeling, data preprocessing, bias in data, and privacy-preserving techniques. Organizations must ensure training data is representative, appropriate for the intended use, and free from harmful biases. Data lineage must be documented.

AI System Lifecycle controls (A.6) cover every phase from design through retirement. Each phase has specific requirements including documentation, review, and approval processes. Testing must include functional testing, performance testing, fairness testing, and security testing.
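The pre-deployment testing requirement above can be expressed as a release gate: a system proceeds only when every required test category has passing evidence. This is a hypothetical sketch; the category names follow the text, but the record format is an assumption.

```python
# Hypothetical sketch of an A.6-style verification gate: deployment is
# approved only if every required test category has passing evidence.
# The dict-of-results format is an assumption, not from the standard.
REQUIRED_TESTS = {"functional", "performance", "fairness", "security"}

def deployment_gate(test_results: dict[str, bool]) -> tuple[bool, set[str]]:
    """Return (approved, blocking categories). A category blocks the
    release if it is missing entirely or recorded as failing."""
    blocking = {t for t in REQUIRED_TESTS if not test_results.get(t, False)}
    return (not blocking, blocking)

approved, blockers = deployment_gate(
    {"functional": True, "performance": True, "security": True}
)
print(approved)  # False — fairness testing has no passing evidence
print(blockers)  # {'fairness'}
```

Treating a missing category the same as a failing one mirrors the audit logic: absence of evidence is itself a finding.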

Third-party and Supply Chain controls (A.10) require due diligence on AI components sourced from external providers, including open-source models, APIs, and datasets. Organizations remain responsible for AI risks even when using third-party components — outsourcing does not transfer risk accountability.

Implement or Justify Exclusion

Every Annex A control must be addressed in the Statement of Applicability (SoA). For each of the 38 controls, the organization must either implement the control with evidence of implementation, or provide a documented, risk-based justification for its exclusion. Simply ignoring a control is a nonconformity that auditors will flag. This is one of the most common audit findings.
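The implement-or-justify rule lends itself to a simple completeness check over the SoA. The sketch below is illustrative; the field names and control IDs are assumptions, not the standard's schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of an SoA completeness check: every Annex A control
# must be implemented (with evidence) or excluded (with a documented
# justification). Field names and control IDs are illustrative assumptions.
@dataclass
class SoAEntry:
    control_id: str              # e.g. "A.5.2" (illustrative ID)
    implemented: bool
    evidence: str = ""           # reference to implementation evidence
    exclusion_justification: str = ""

def find_nonconformities(entries: list[SoAEntry]) -> list[str]:
    """Flag entries an auditor would raise: implemented without evidence,
    or excluded without a documented justification."""
    findings = []
    for e in entries:
        if e.implemented and not e.evidence:
            findings.append(f"{e.control_id}: implemented but no evidence")
        if not e.implemented and not e.exclusion_justification:
            findings.append(f"{e.control_id}: excluded without justification")
    return findings

soa = [
    SoAEntry("A.2.2", implemented=True, evidence="AI-policy-v3.pdf"),
    SoAEntry("A.9.4", implemented=False),  # no justification -> finding
]
print(find_nonconformities(soa))  # ['A.9.4: excluded without justification']
```

In practice the check would also verify that all 38 controls appear in the register at all, since a silently missing entry is the same nonconformity as an unjustified exclusion.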

Key Points
38 normative controls across multiple domains
Must implement or justify exclusion of each control (SoA)
AI Impact Assessment required before deployment
Data management covers quality, provenance, and bias
Third-party due diligence is mandatory
Lifecycle controls cover design through retirement

2.4 — Certification Process and Integration

ISO 42001 certification demonstrates to stakeholders that an organization has a functioning, audited AI Management System. The certification process follows the same model as other ISO management system certifications and is conducted by accredited certification bodies operating under ISO/IEC 42006 requirements.

Certification Stages
1. Stage 1 — Documentation Review

The certification body reviews the organization's AIMS documentation: AI policy, scope statement, risk assessment methodology, Statement of Applicability, AI impact assessments, and procedures. This stage determines readiness for Stage 2 and identifies any gaps to address.

2. Stage 2 — Implementation Audit

On-site (or remote) evaluation of AIMS implementation effectiveness. Auditors verify that documented processes are actually followed, controls are operating effectively, and evidence of compliance exists. Nonconformities are raised and must be resolved.

3. Certification Granted

Upon successful completion of Stage 2 and resolution of any major nonconformities, the certification body issues a certificate valid for 3 years.

4. Annual Surveillance Audits

Conducted annually to verify continued compliance and improvement. Surveillance audits are smaller in scope than Stage 2 but still examine key areas and any changes since the last audit.

5. Recertification (Year 3)

A full audit conducted before the 3-year certificate expires. It evaluates the overall effectiveness of the AIMS, and a new 3-year certificate is issued upon success.

Certification Lifecycle
Stage 1 Audit (documentation review) → Stage 2 Audit (implementation evaluation) → Certification (3-year certificate issued) → Surveillance (annual compliance checks)

Integration with Other Management Systems

Integration with ISO 27001 (Information Security) is natural since both share the Harmonized Structure. Many AI risks overlap with information security risks — data protection, access control, incident management, and supply chain security. Organizations with existing ISO 27001 certification can extend their ISMS to include AIMS requirements, sharing processes for risk assessment, internal audit, management review, and corrective action.

Integration with ISO 9001 (Quality Management) ensures AI systems meet quality standards. The PDCA cycle and process approach are common to both standards. Quality management principles like customer focus, evidence-based decision-making, and continual improvement directly apply to AI system development and operation.

ISO 42001 vs ISO 27001 vs ISO 9001 — Integration Points
Aspect | ISO 42001 (AI) | ISO 27001 (InfoSec) | ISO 9001 (Quality)
Primary Focus | Responsible AI development and use | Information security management | Quality management for products/services
Annex Controls | 38 controls (AI-specific: impact assessment, data, lifecycle) | 93 controls (Annex A, 2022 revision) | No Annex A (requirements embedded in clauses)
Risk Assessment Scope | Individuals, groups, society + organizational risks | Confidentiality, integrity, availability of information | Product/service quality, customer satisfaction
Key Documentation | AI policy, AI impact assessments, SoA | ISMS policy, risk treatment plan, SoA | Quality policy, quality objectives, process documentation
Audit Approach | Stage 1 + Stage 2, 3-year cycle | Stage 1 + Stage 2, 3-year cycle | Stage 1 + Stage 2, 3-year cycle
Common PDCA Elements | Leadership, planning, support, operations, evaluation, improvement | Leadership, planning, support, operations, evaluation, improvement | Leadership, planning, support, operations, evaluation, improvement

Documentation requirements for ISO 42001 include: AI policy, scope statement, risk assessment methodology, AI impact assessments, Statement of Applicability (for Annex A controls), AI system inventory, and records of competence, monitoring, audits, and management reviews. Organizations integrating with ISO 27001 can often combine documentation where requirements overlap.
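The documentation list above is a natural candidate for a readiness check ahead of Stage 1. The sketch below assumes a simple set-based document register; the register format and function name are illustrative, not part of the standard.

```python
# Hypothetical sketch: compare an organization's document register against
# the ISO 42001 documentation listed above. Register format is an assumption.
REQUIRED_DOCS = {
    "AI policy", "scope statement", "risk assessment methodology",
    "AI impact assessments", "statement of applicability",
    "AI system inventory",
}

def missing_documents(register: set[str]) -> set[str]:
    """Return required documents absent from the register (case-insensitive)."""
    held = {d.lower() for d in register}
    return {d for d in REQUIRED_DOCS if d.lower() not in held}

register = {"AI Policy", "Scope Statement", "AI System Inventory"}
print(sorted(missing_documents(register)))
# ['AI impact assessments', 'risk assessment methodology',
#  'statement of applicability']
```

An organization integrating with ISO 27001 could extend the same check with ISMS documents, since overlapping items (such as the SoA and risk methodology) can be combined.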

Integration Strategy

For exam questions on integration: emphasize that the Harmonized Structure is the key enabler. An integrated management system (IMS) uses a single set of processes for internal audit, management review, corrective action, and documentation control — with domain-specific extensions for AI (42001), information security (27001), and quality (9001). This reduces duplication and audit fatigue.

Key Points
Two-stage audit: documentation review + implementation evaluation
3-year certification with annual surveillance
Natural integration with ISO 27001 and ISO 9001
Statement of Applicability required for Annex A controls
Comprehensive documentation requirements
Harmonized Structure enables integrated management systems
Practice Questions
Q1: When was ISO/IEC 42001 published, and what does it specify?

Published December 2023, it specifies requirements for an AI Management System (AIMS) — the first international standard for AI management systems. It is certifiable, follows the Harmonized Structure, and uses the PDCA cycle.

Q2: How many controls does Annex A contain, and what must organizations do with each one?

Annex A contains 38 normative controls. Organizations must either implement each control or provide a documented, risk-based justification for its exclusion in their Statement of Applicability (SoA). Simply ignoring a control is a nonconformity.

Q3: Describe the ISO 42001 certification process, including the audit stages and certification lifecycle.

A two-stage third-party audit: Stage 1 reviews documentation and AIMS design; Stage 2 evaluates implementation effectiveness on-site. Certification is valid for 3 years with annual surveillance audits. Recertification requires a full audit before the certificate expires.

Q4: How does ISO 42001 integrate with ISO 27001, and what makes this integration natural?

Both follow the Harmonized Structure (same clause numbering 4-10, same PDCA cycle). AI risks overlap with information security risks. Organizations can extend existing ISMS to include AIMS requirements, sharing processes for risk assessment, internal audit, management review, and corrective action. An integrated management system reduces duplication.

Q5: What is the difference between the risk assessment requirements in ISO 42001 versus ISO 27001?

ISO 42001 (Clause 6) requires risk assessment to consider impacts on individuals, groups, and society — not just organizational/business risks like confidentiality, integrity, and availability (ISO 27001's focus). ISO 42001's risk scope includes human rights, fairness, safety, and societal impacts, which is broader than traditional information security risk assessment.

Q6: Explain the role of the Statement of Applicability (SoA) in ISO 42001. How is it similar to ISO 27001's SoA?

The SoA documents which of the 38 Annex A controls are implemented and which are excluded, with justification for each exclusion. It mirrors ISO 27001's SoA concept (which covers 93 controls). Both serve as a key audit artifact — auditors verify that every control is addressed. The SoA is a mandatory document for certification.

Q7: What are the key domains covered by Annex A controls? Name at least five.

Annex A domains include: A.2 AI Policies, A.3 Internal Organization, A.4 Resources for AI Systems, A.5 Assessing Impacts of AI Systems, A.6 AI System Lifecycle, A.7 Data for AI Systems, A.8 Information for Interested Parties, A.9 Use of AI Systems, and A.10 Third-party/Supply Chain.

Q8: What is the PDCA cycle and how does it apply to ISO 42001? Give a concrete example for each phase.

PDCA = Plan-Do-Check-Act. Plan: Establish AI policy and conduct risk assessments. Do: Implement Annex A controls and deploy AI systems with safeguards. Check: Monitor AI system performance, conduct internal audits, review effectiveness. Act: Address nonconformities with corrective actions and drive continual improvement. The cycle repeats continuously, ensuring the AIMS evolves with changing risks and organizational needs.
