MODULE 03 · ~3 hrs

EU AI Act

Complete analysis of the EU Artificial Intelligence Act (Regulation 2024/1689) — the world's first comprehensive AI legislation. Covers the risk-based classification, prohibited practices, high-risk requirements, GPAI obligations, and enforcement timeline.

3.1 — Overview and Structure

The EU AI Act (Regulation 2024/1689) was adopted in June 2024 and entered into force on August 1, 2024. It is the world's first comprehensive legal framework for AI, applying to providers, deployers, importers, and distributors of AI systems within the EU market.

The Act uses a risk-based approach with four tiers: Unacceptable Risk (banned), High Risk (heavily regulated), Limited Risk (transparency obligations), and Minimal Risk (no specific obligations). This tiered approach ensures regulation is proportionate to the level of risk posed by the AI system.

EU AI Act Risk Tiers
Unacceptable Risk
PROHIBITED — AI practices that pose a clear threat to safety, livelihoods, or rights. These are banned outright (e.g., social scoring, subliminal manipulation).
High Risk
HEAVILY REGULATED — AI in critical areas (biometrics, critical infrastructure, employment, law enforcement). Must meet strict requirements: risk management, data governance, transparency, human oversight, CE marking.
Limited Risk
TRANSPARENCY OBLIGATIONS — AI systems that interact with humans (chatbots), generate synthetic content (deepfakes), or perform emotion recognition. Must disclose AI use to users.
Minimal Risk
NO SPECIFIC OBLIGATIONS — The vast majority of AI systems (spam filters, AI in video games, inventory management). Free to operate under existing law. Voluntary codes of conduct encouraged.
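As a rough illustration, the tiered triage above can be sketched as a decision function. This is a hypothetical simplification for study purposes: the use-case labels and set memberships are invented, and real classification requires legal analysis.

```python
# Hypothetical triage sketch of the Act's four risk tiers.
# The use-case labels and set membership below are illustrative
# simplifications, not a legal classification.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_AREAS = {"employment", "biometrics", "law_enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def risk_tier(use_case: str) -> str:
    """Map an illustrative use-case label to its AI Act tier."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"   # banned outright (Article 5)
    if use_case in ANNEX_III_AREAS:
        return "high"           # heavily regulated (Articles 6-49)
    if use_case in TRANSPARENCY_USES:
        return "limited"        # disclosure obligations
    return "minimal"            # no specific AI Act obligations

print(risk_tier("employment"))   # -> high
print(risk_tier("spam_filter"))  # -> minimal
```

Note how the tiers are checked in order of severity: a system that falls into a higher tier never drops down to a lower one.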

Extraterritorial scope: The Act applies not just to EU-based organizations but to any entity placing AI systems on the EU market or whose AI system outputs are used in the EU. This means non-EU companies (including U.S. and Asian tech companies) must comply if they serve EU users — similar to GDPR's extraterritorial reach.

AI System — Official Definition (Article 3)

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition is intentionally broad and aligned with the OECD definition of AI.

The Act distinguishes between several key roles: Providers (develop or place AI on the market), Deployers (use AI systems under their authority), Importers (place non-EU AI on the EU market), and Distributors (make AI available in the supply chain). Each role carries specific obligations proportionate to their relationship with the AI system.

Key Points
Adopted June 2024, entered into force August 1, 2024
Risk-based approach: four tiers of regulation
Extraterritorial scope — applies to non-EU companies serving EU
Broad definition of 'AI system' aligned with OECD definition
Phased enforcement timeline (2025–2027)
Four key roles: Provider, Deployer, Importer, Distributor

3.2 — Prohibited AI Practices (Article 5)

Article 5 of the EU AI Act defines AI practices that are entirely prohibited due to the unacceptable risks they pose to fundamental rights, safety, and democratic values. These prohibitions are the first to take effect, becoming enforceable on February 2, 2025.

Prohibited AI Practices — Complete List
1. Subliminal Manipulation: AI deploying subliminal techniques beyond a person's consciousness to materially distort behavior, causing or likely to cause significant harm. Exceptions: none.
2. Exploitation of Vulnerabilities: AI exploiting vulnerabilities of specific groups due to age, disability, or social/economic situation to materially distort behavior, causing significant harm. Exceptions: none.
3. Social Scoring: AI evaluating or classifying individuals based on social behavior or personal characteristics, leading to detrimental treatment that is disproportionate or unrelated to the context in which the data was collected. Exceptions: none.
4. Predictive Policing (Individual-Level): AI predicting an individual's criminal behavior based solely on profiling or personality traits. Exceptions: systems supporting human assessment based on objective, verifiable facts linked to criminal activity are allowed.
5. Untargeted Facial Scraping: scraping facial images from the internet or CCTV footage to create or expand facial recognition databases. Exceptions: none.
6. Emotion Recognition (Workplace/Education): AI inferring the emotions of individuals in workplace and educational settings. Exceptions: safety or medical purposes (e.g., detecting fatigue in safety-critical roles).
7. Biometric Categorization (Sensitive Attributes): AI categorizing individuals based on biometric data to infer sensitive attributes (race, political opinions, sexual orientation, religious beliefs). Exceptions: lawful filtering of lawfully acquired biometric datasets in law enforcement.
8. Real-Time Remote Biometric ID (Public Spaces): real-time remote biometric identification in publicly accessible spaces for law enforcement. Exceptions (narrow): searching for victims of specific serious crimes or missing persons, preventing an imminent terrorist threat, and locating/identifying suspects of specific serious crimes, each requiring prior authorization by a judicial or independent administrative authority.
Effective February 2, 2025

Prohibited AI practices were the FIRST provisions to become enforceable — as of February 2, 2025. Organizations must have already discontinued any prohibited practices. Penalties for violations are the highest under the Act: up to 35 million EUR or 7% of global annual turnover, whichever is higher.

Note on social scoring: in the final text of the Act, the ban is not limited to public authorities; it applies to private actors as well. It bites only where the social score leads to detrimental treatment in contexts unrelated to where the data was originally collected, or treatment disproportionate to the social behavior, so ordinary private-sector loyalty programs and credit scoring generally fall outside this provision, though they may be subject to other regulations. Also, the real-time biometric identification exceptions always require prior authorization by a judicial authority or an independent administrative authority.

Key Points
8 categories of prohibited AI practices
Subliminal manipulation and exploitation banned with no exceptions
Social scoring banned (final text covers public and private actors)
Real-time biometric ID banned with narrow law enforcement exceptions
Emotion recognition banned in workplaces and schools (safety/medical exceptions)
Prohibitions effective from February 2, 2025
Highest penalties: up to 35M EUR or 7% global turnover

3.3 — High-Risk AI Systems (Articles 6–49)

High-risk AI systems are the most heavily regulated category under the EU AI Act. They are subject to comprehensive requirements including risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity standards. There are two distinct pathways for classifying an AI system as high-risk.

Category 1: AI systems that are safety components of products already covered by EU harmonization legislation (e.g., medical devices under MDR, machinery under the Machinery Regulation, toys, aviation systems) AND require a third-party conformity assessment under that legislation.

Category 2: AI systems listed in Annex III of the Act, covering specific high-risk use cases across eight areas. These standalone AI systems are considered high-risk regardless of whether they are embedded in a larger product.

Annex III — High-Risk AI System Areas
1. Biometric Identification and Categorization: remote biometric identification (non-real-time); biometric categorization by sensitive attributes
2. Critical Infrastructure Management: AI managing road traffic safety; water, gas, heating, and electricity supply
3. Education and Vocational Training: AI determining access to education, evaluating learning outcomes, monitoring exam integrity
4. Employment and Worker Management: AI for recruitment, screening, interview evaluation, promotion decisions, task allocation, performance monitoring, termination
5. Access to Essential Services: AI assessing creditworthiness, evaluating insurance risk/pricing, evaluating eligibility for public benefits
6. Law Enforcement: AI for risk assessment of natural persons, polygraph/emotion detection, evidence analysis, crime prediction (at area level)
7. Migration, Asylum, and Border Control: AI for risk assessment of migrants, document authentication, application evaluation
8. Administration of Justice and Democracy: AI assisting judicial authorities in researching/interpreting facts and law; AI used in elections and referendums

Provider vs Deployer Obligations

High-Risk AI Obligations by Role
Risk Management System. Provider: must implement throughout the lifecycle. Deployer: N/A (provider obligation).
Data Governance. Provider: must ensure quality, relevance, and representativeness of datasets. Deployer: must ensure input data is relevant.
Technical Documentation. Provider: must create and maintain comprehensive documentation. Deployer: N/A (provider obligation).
Record-Keeping (Logging). Provider: must design the system to enable automatic logging. Deployer: must keep logs generated by the system.
Human Oversight. Provider: must design the system to allow effective human oversight. Deployer: must assign competent individuals for oversight.
Conformity Assessment. Provider: must conduct (self-assessment or third-party, depending on the area). Deployer: N/A (provider obligation).
EU Database Registration. Provider: must register before placing on the market. Deployer: must register use in certain cases.
CE Marking. Provider: must affix the CE marking. Deployer: N/A.
Post-Market Monitoring. Provider: must establish and maintain a monitoring system. Deployer: must monitor the system in operation.
Serious Incident Reporting. Provider: must report to national authorities. Deployer: must report to the provider and/or authorities.
Fundamental Rights Impact Assessment. Provider: N/A (deployer obligation). Deployer: must conduct (public bodies plus certain private entities).
Information Duties. Provider: must provide instructions and information to deployers. Deployer: must inform affected individuals.
Use in Accordance with Instructions. Provider: N/A. Deployer: must follow provider instructions.
Log Retention. Provider: N/A. Deployer: must retain logs for at least six months.
Provider vs Deployer Distinction

Exam questions frequently test the distinction between provider and deployer obligations. Key differentiators: Providers build the system and are responsible for design-time obligations (technical documentation, conformity assessment, CE marking). Deployers use the system and are responsible for use-time obligations (human oversight assignment, fundamental rights impact assessment, log retention, informing affected individuals). If a deployer modifies a high-risk AI system substantially, they become the provider.

Key Points
Two categories: safety components + Annex III listed systems
Mandatory risk management system throughout lifecycle
CE marking and EU database registration required
Human oversight is mandatory for all high-risk systems
Deployers must conduct fundamental rights impact assessments
Serious incidents must be reported to authorities
Substantial modification by deployer triggers provider obligations

3.4 — General-Purpose AI (GPAI) and Foundation Models

General-Purpose AI (GPAI) models receive dedicated rules under the EU AI Act, recognizing that foundation models and large language models present unique challenges that do not fit neatly into the risk-based classification of AI systems. GPAI rules create a two-tier system: baseline obligations for all GPAI models, and additional obligations for models posing systemic risk.

A GPAI model is defined as an AI model — including when trained with large amounts of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market. This covers large language models, multimodal models, and other foundation models.

GPAI Obligations Flow
Step 1. Is it a GPAI model? Trained at scale, displays significant generality, performs a wide range of distinct tasks.
Step 2. If so, baseline obligations apply: technical documentation + copyright compliance + training data summary.
Step 3. Systemic risk assessment: does the model reach 10^25 FLOPs of cumulative training compute, or has the AI Office designated it?
Step 4. If yes, additional obligations apply: adversarial testing + risk mitigation + incident reporting + cybersecurity.
GPAI Obligations — Baseline vs Systemic Risk
Technical Documentation. All GPAI: required, including model architecture, training methodology, and evaluation results. Systemic risk: required, with enhanced detail including safety evaluations.
EU Copyright Law Compliance. All GPAI: required; must comply with text and data mining rules and respect opt-outs. Systemic risk: required.
Training Data Summary. All GPAI: required; publish a sufficiently detailed summary of training content. Systemic risk: required.
Model Evaluation (Red-Teaming). All GPAI: not required. Systemic risk: required; standardized model evaluation including adversarial testing.
Systemic Risk Assessment. All GPAI: not required. Systemic risk: required; identify, assess, and mitigate systemic risks.
Incident Reporting. All GPAI: not required. Systemic risk: required; report serious incidents to the AI Office.
Cybersecurity Protections. All GPAI: not required. Systemic risk: required; ensure an adequate level of cybersecurity protection.
Codes of Practice. All GPAI: may follow to demonstrate compliance. Systemic risk: may follow; presumption of conformity if followed.
Open-Source Exemptions. All GPAI: exempt from the technical documentation obligations (copyright compliance and the training data summary still apply) unless systemic risk. Systemic risk: no exemptions; full obligations apply regardless of licensing.

The systemic risk threshold is set at 10^25 FLOPs (floating-point operations) of cumulative compute used for training. Models at or above this threshold are presumed to have systemic risk. The AI Office can also designate models below this threshold as having systemic risk based on other criteria such as the number of registered end users, the model's impact on the internal market, or its reach.
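The presumption above reduces to a simple check. A minimal sketch follows: the 10^25 FLOP figure comes from the Act, while the function name and the designation flag are illustrative assumptions.

```python
# Sketch of the systemic-risk presumption under the Act.
# SYSTEMIC_RISK_FLOPS reflects the 10^25 compute threshold; the helper
# name and the designation flag are illustrative assumptions.

SYSTEMIC_RISK_FLOPS = 10**25  # cumulative training compute threshold

def has_systemic_risk(training_flops: float,
                      designated_by_ai_office: bool = False) -> bool:
    """Presumed systemic risk at or above the threshold, or by designation."""
    return training_flops >= SYSTEMIC_RISK_FLOPS or designated_by_ai_office

print(has_systemic_risk(3e25))         # above threshold: True
print(has_systemic_risk(8e24))         # below threshold: False
print(has_systemic_risk(8e24, True))   # designated anyway: True
```

The second parameter captures the key exam point: a model below the compute threshold can still carry systemic risk if the AI Office designates it.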

The AI Office (established within the European Commission) oversees GPAI compliance. Codes of practice are being developed in collaboration with GPAI providers and other stakeholders to provide detailed guidance on how to meet GPAI obligations. The first codes were expected by May 2025.

Free and open-source GPAI models have partial exemptions: they are not required to provide the technical documentation otherwise owed to the AI Office and downstream providers, unless they pose systemic risk. However, all GPAI models, including open-source, must still comply with EU copyright rules, publish a training data summary, and observe the prohibited practice restrictions.

GPAI Key Numbers

Remember the systemic risk compute threshold: 10^25 FLOPs (10 to the power of 25). This is a frequently tested number. Also: open-source exemptions do NOT apply to systemic risk models. And: GPAI providers are NOT the same as high-risk AI system providers — GPAI has its own separate regulatory regime under the Act.

Key Points
All GPAI: technical docs + copyright compliance + training data summary
Systemic risk threshold: >10^25 FLOPs or AI Office designation
Systemic risk: adversarial testing + incident reporting + cybersecurity
AI Office oversees GPAI compliance
Open-source exemptions (except for systemic risk models)
Codes of practice provide implementation guidance

3.5 — Enforcement and Timeline

The EU AI Act uses a phased enforcement timeline, with different provisions taking effect at different dates. This approach gives organizations time to prepare for compliance while ensuring the most critical prohibitions take effect first. Understanding this timeline is essential for exam preparation.

EU AI Act Enforcement Timeline
Aug 1, 2024
Entry into Force
The EU AI Act officially enters into force. The countdown begins for all phased deadlines.
Feb 2, 2025
Prohibited Practices + AI Literacy
Prohibited AI practices (Article 5) become enforceable. AI literacy obligations (Article 4) also take effect. Organizations must have discontinued all banned practices and begun AI literacy training.
Aug 2, 2025
GPAI Rules
Rules for General-Purpose AI models take effect. GPAI providers must comply with documentation, copyright, and training data summary obligations. Systemic risk models must meet additional requirements.
Aug 2, 2026
High-Risk Annex III Systems
Full requirements for high-risk AI systems listed in Annex III (standalone AI in biometrics, employment, education, law enforcement, etc.) become enforceable.
Aug 2, 2027
High-Risk Annex I Systems
Requirements for high-risk AI systems that are safety components of products covered by EU harmonization legislation (Annex I) become enforceable.
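The phased dates above can be kept as a small lookup table for quick checks. The label keys here are informal shorthand for this course, not terms from the Act.

```python
from datetime import date

# The Act's phased applicability dates as a lookup table.
# The label keys are informal shorthand, not terms from the Act.

APPLICABILITY = {
    "entry_into_force":         date(2024, 8, 1),
    "prohibitions_ai_literacy": date(2025, 2, 2),
    "gpai_rules":               date(2025, 8, 2),
    "high_risk_annex_iii":      date(2026, 8, 2),
    "high_risk_annex_i":        date(2027, 8, 2),
}

def applies_on(provision: str, on: date) -> bool:
    """True if the given provision is already applicable on date `on`."""
    return on >= APPLICABILITY[provision]

print(applies_on("gpai_rules", date(2025, 3, 1)))                # False
print(applies_on("prohibitions_ai_literacy", date(2025, 3, 1)))  # True
```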

Penalties and Enforcement Structure

Penalty Structure
Prohibited AI practices: up to 35 million EUR or 7% of global annual turnover (SMEs/startups: proportionate lower caps).
High-risk system violations: up to 15 million EUR or 3% of global annual turnover (SMEs/startups: proportionate lower caps).
GPAI model violations: up to 15 million EUR or 3% of global annual turnover (SMEs/startups: proportionate lower caps).
Incorrect or misleading information to authorities: up to 7.5 million EUR or 1% of global annual turnover (SMEs/startups: proportionate lower caps).

National Competent Authorities and Market Surveillance Authorities enforce the Act in each EU member state. Each member state must designate at least one notifying authority and one market surveillance authority. The European AI Office (within the European Commission) handles enforcement for GPAI models at the EU level and coordinates across member states.

AI Literacy (Article 4) is a cross-cutting obligation that took effect alongside the prohibitions in February 2025. It requires all providers and deployers to ensure sufficient AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. The level of literacy must account for technical knowledge, experience, education and training, the context of use, and the persons or groups affected.

AI Literacy — Often Overlooked

AI Literacy (Article 4) is frequently tested because candidates overlook it. Key facts: (1) It applies to ALL providers and deployers, not just high-risk. (2) It took effect in February 2025, making it one of the EARLIEST obligations. (3) It is NOT limited to technical staff — it covers anyone dealing with AI systems. (4) The required literacy level is context-dependent, not a fixed standard.

The penalty amounts are the HIGHER of the fixed EUR amount or the percentage of global annual turnover, whichever is greater. For companies that are part of a larger group, global annual turnover refers to the entire group's worldwide turnover. SMEs and startups receive proportionate caps to avoid disproportionate penalties that could threaten their viability.
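The "whichever is higher" rule is simple arithmetic. A minimal sketch for the top penalty tier: the 35 million EUR cap and 7% rate are from the Act, while the turnover figures are invented for illustration.

```python
# Sketch of the "whichever is higher" rule for the top penalty tier
# (prohibited practices). The 35M EUR cap and 7% rate are from the Act;
# the turnover figures are invented for illustration. Integer arithmetic
# avoids float rounding on large turnovers.

def max_fine_prohibited(global_turnover_eur: int) -> int:
    """Maximum fine: the higher of 35M EUR or 7% of global turnover."""
    return max(35_000_000, global_turnover_eur * 7 // 100)

# Small company: the fixed 35M EUR cap dominates.
print(max_fine_prohibited(100_000_000))     # -> 35000000
# Large group (turnover counted group-wide): the 7% share dominates.
print(max_fine_prohibited(10_000_000_000))  # -> 700000000
```

The crossover sits at 500 million EUR of turnover (7% of 500M = 35M); above that, the percentage governs, which is why group-wide turnover matters so much for large corporations.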

Key Points
Fines up to 35M EUR or 7% global turnover (whichever is higher)
Phased timeline: Feb 2025 → Aug 2025 → Aug 2026 → Aug 2027
AI literacy obligation effective February 2025 — applies to ALL
National authorities + AI Office enforce jointly
SMEs receive proportionate penalty caps
Each member state designates competent authorities
Practice Questions
Q1: What are the four risk tiers in the EU AI Act, and give an example for each?
Answer:

Unacceptable Risk (prohibited; e.g., social scoring), High Risk (heavily regulated; e.g., AI in employment recruitment), Limited Risk (transparency obligations; e.g., chatbots must disclose AI use), and Minimal Risk (no specific AI Act obligations; e.g., spam filters, AI in video games).

Q2: Name all eight prohibited AI practices under Article 5. Which were the first provisions to take effect?
Answer:

1) Subliminal manipulation, 2) Exploitation of vulnerabilities, 3) Social scoring, 4) Individual-level predictive policing, 5) Untargeted facial scraping, 6) Emotion recognition in workplace/education, 7) Biometric categorization for sensitive attributes, 8) Real-time remote biometric ID in public spaces (with narrow exceptions). These took effect February 2, 2025, the earliest enforceable provisions.

Q3: What additional obligations apply to GPAI models with systemic risk? What is the compute threshold?
Answer:

Systemic risk models must: conduct standardized model evaluation including adversarial testing (red-teaming), assess and mitigate systemic risks, report serious incidents to the AI Office, and ensure adequate cybersecurity protections. The threshold is >10^25 FLOPs of cumulative training compute, or designation by the AI Office based on other criteria.

Q4: What is the complete enforcement timeline for the EU AI Act? List all key dates.
Answer:

August 1, 2024: Entry into force. February 2, 2025: Prohibited practices + AI literacy. August 2, 2025: GPAI rules. August 2, 2026: High-risk Annex III (standalone AI in biometrics, employment, etc.). August 2, 2027: High-risk Annex I (AI as safety components of products).

Q5: What is the difference between a 'provider' and a 'deployer' under the EU AI Act? What happens if a deployer substantially modifies a high-risk AI system?
Answer:

Providers develop or place AI systems on the market and bear design-time obligations (technical documentation, conformity assessment, CE marking, post-market monitoring). Deployers use AI systems under their authority and bear use-time obligations (human oversight, fundamental rights impact assessment, log retention, informing individuals). If a deployer substantially modifies a high-risk system, they become the provider and assume all provider obligations.

Q6: Explain the AI Literacy obligation under Article 4. Who does it apply to, and when did it take effect?
Answer:

Article 4 requires ALL providers and deployers to ensure sufficient AI literacy among staff and any persons dealing with the operation and use of AI systems on their behalf. It is not limited to technical staff. The required level is context-dependent, considering technical knowledge, experience, education, context of use, and affected persons. It took effect February 2, 2025 — one of the earliest obligations.

Q7: What are the penalty tiers under the EU AI Act? How are penalties calculated for large corporations?
Answer:

Three tiers: (1) Prohibited practices: up to 35M EUR or 7% global annual turnover; (2) High-risk/GPAI violations: up to 15M EUR or 3%; (3) Incorrect information to authorities: up to 7.5M EUR or 1%. The HIGHER of the fixed amount or percentage applies. For group companies, 'global annual turnover' refers to the entire group's worldwide turnover. SMEs receive proportionate lower caps.

Q8: How does the EU AI Act handle open-source GPAI models? Are they exempt from all requirements?
Answer:

Open-source GPAI models receive partial exemptions: they are not required to provide the technical documentation otherwise owed to the AI Office and downstream providers. However, they MUST still comply with EU copyright law, publish a training data summary, and observe the prohibited practice rules. Critically, if an open-source model poses systemic risk (10^25 FLOPs or AI Office designation), the exemptions fall away and full systemic risk obligations apply.
