EU AI Act
Complete analysis of the EU Artificial Intelligence Act (Regulation 2024/1689) — the world's first comprehensive AI legislation. Covers the risk-based classification, prohibited practices, high-risk requirements, GPAI obligations, and enforcement timeline.
3.1 — Overview and Structure
The EU AI Act (Regulation 2024/1689) was adopted in June 2024 and entered into force on August 1, 2024. It is the world's first comprehensive legal framework for AI, applying to providers, deployers, importers, and distributors of AI systems within the EU market.
The Act uses a risk-based approach with four tiers: Unacceptable Risk (banned), High Risk (heavily regulated), Limited Risk (transparency obligations), and Minimal Risk (no specific obligations). This tiered approach ensures regulation is proportionate to the level of risk posed by the AI system.
Extraterritorial scope: The Act applies not just to EU-based organizations but to any entity placing AI systems on the EU market or whose AI system outputs are used in the EU. This means non-EU companies (including U.S. and Asian tech companies) must comply if they serve EU users — similar to GDPR's extraterritorial reach.
The Act defines an AI system (Article 3(1)) as: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition is intentionally broad and aligned with the OECD definition of AI.
The Act distinguishes between several key roles: Providers (develop or place AI on the market), Deployers (use AI systems under their authority), Importers (place non-EU AI on the EU market), and Distributors (make AI available in the supply chain). Each role carries specific obligations proportionate to their relationship with the AI system.
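To make the tiered logic concrete, here is a minimal Python sketch of the classification order a compliance triage might follow — check prohibitions first, then the high-risk categories, then transparency triggers, defaulting to minimal risk. The enum values, dictionary keys, and example checks are illustrative assumptions, not terms from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific AI Act obligations"

def classify(use_case: dict) -> RiskTier:
    # Order matters: prohibitions trump everything else.
    if use_case.get("prohibited_practice"):        # e.g., social scoring
        return RiskTier.UNACCEPTABLE
    if use_case.get("annex_iii_area") or use_case.get("annex_i_safety_component"):
        return RiskTier.HIGH                       # e.g., recruitment screening
    if use_case.get("interacts_with_humans"):      # e.g., a customer-facing chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL                        # e.g., a spam filter

print(classify({"annex_iii_area": "employment"}))  # RiskTier.HIGH
```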
3.2 — Prohibited AI Practices (Article 5)
Article 5 of the EU AI Act defines AI practices that are entirely prohibited due to the unacceptable risks they pose to fundamental rights, safety, and democratic values. These prohibitions are the first to take effect, becoming enforceable on February 2, 2025.
| # | Practice | Description | Exceptions |
|---|---|---|---|
| 1 | Subliminal Manipulation | AI deploying subliminal techniques beyond a person's consciousness to materially distort behavior, causing or likely to cause significant harm. | None |
| 2 | Exploitation of Vulnerabilities | AI exploiting vulnerabilities of specific groups due to age, disability, or social/economic situation to materially distort behavior causing significant harm. | None |
| 3 | Social Scoring | AI evaluating or classifying individuals based on social behavior or personality characteristics, leading to detrimental treatment that is disproportionate or unrelated to the context in which the data was collected. Applies to both public and private actors. | None |
| 4 | Predictive Policing (Individual-Level) | AI predicting individual criminal behavior based solely on profiling or personality traits, without objective, verifiable facts linked to criminal activity. | Systems that support human assessment based on objective facts are allowed |
| 5 | Untargeted Facial Scraping | Scraping facial images from the internet or CCTV footage to create or expand facial recognition databases. | None |
| 6 | Emotion Recognition (Workplace/Education) | AI inferring emotions of individuals in workplace and educational settings. | Safety or medical purposes (e.g., detecting fatigue in safety-critical roles) |
| 7 | Biometric Categorization (Sensitive Attributes) | AI categorizing individuals based on biometric data to infer sensitive attributes (race, political opinions, sexual orientation, religious beliefs). | Lawfully acquired biometric data used in law enforcement for filtering purposes |
| 8 | Real-Time Remote Biometric ID (Public Spaces) | Real-time remote biometric identification in publicly accessible spaces for law enforcement. | Narrowly: search for specific serious crime victims/missing persons, prevention of an imminent terrorist threat, locating/identifying suspects of specific serious crimes — all requiring prior authorization by a judicial or independent administrative authority |
Prohibited AI practices were the FIRST provisions to become enforceable — as of February 2, 2025. Organizations must have already discontinued any prohibited practices. Penalties for violations are the highest under the Act: up to 35 million EUR or 7% of global annual turnover, whichever is higher.
Note that the social scoring prohibition covers both public and private actors, but only where the score leads to detrimental treatment in contexts unrelated to where the data was originally collected, or treatment disproportionate to the social behavior. Ordinary single-context uses such as private loyalty programs and credit scoring generally fall outside this prohibition, though they may be subject to other regulations (and creditworthiness assessment is separately classified as high-risk under Annex III). Also, the real-time biometric identification exceptions always require prior authorization by a judicial or independent administrative authority.
3.3 — High-Risk AI Systems (Articles 6–49)
High-risk AI systems are the most heavily regulated category under the EU AI Act. They are subject to comprehensive requirements including risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity standards. There are two distinct pathways for classifying an AI system as high-risk.
Category 1: AI systems that are safety components of products (or are themselves products) covered by the EU harmonization legislation listed in Annex I of the Act (e.g., medical devices under the MDR, machinery under the Machinery Regulation, toys, aviation systems) AND that require a third-party conformity assessment under that legislation.
Category 2: AI systems listed in Annex III of the Act, covering specific high-risk use cases across eight areas. These standalone AI systems are considered high-risk regardless of whether they are embedded in a larger product.
| # | Area | Examples |
|---|---|---|
| 1 | Biometric Identification and Categorization | Remote biometric identification (non-real-time), biometric categorization by sensitive attributes |
| 2 | Critical Infrastructure Management | AI managing road traffic safety, water/gas/heating/electricity supply |
| 3 | Education and Vocational Training | AI determining access to education, evaluating learning outcomes, monitoring exam integrity |
| 4 | Employment and Worker Management | AI for recruitment, screening, interview evaluation, promotion decisions, task allocation, performance monitoring, termination |
| 5 | Access to Essential Services | AI assessing creditworthiness, evaluating insurance risk/pricing, evaluating eligibility for public benefits |
| 6 | Law Enforcement | AI for risk assessment of natural persons, polygraph/emotion detection, evidence analysis, crime prediction (at area level) |
| 7 | Migration, Asylum, and Border Control | AI for risk assessment of migrants, document authentication, application evaluation |
| 8 | Administration of Justice and Democracy | AI assisting judicial authorities in researching/interpreting facts and law, used in elections/referendums |
Provider vs Deployer Obligations
Exam questions frequently test the distinction between provider and deployer obligations. Key differentiators: Providers build the system and are responsible for design-time obligations (technical documentation, conformity assessment, CE marking). Deployers use the system and are responsible for use-time obligations (human oversight assignment, fundamental rights impact assessment, log retention, informing affected individuals). If a deployer modifies a high-risk AI system substantially, they become the provider.
3.4 — General-Purpose AI (GPAI) and Foundation Models
General-Purpose AI (GPAI) models receive dedicated rules under the EU AI Act, recognizing that foundation models and large language models present unique challenges that do not fit neatly into the risk-based classification of AI systems. GPAI rules create a two-tier system: baseline obligations for all GPAI models, and additional obligations for models posing systemic risk.
A GPAI model is defined as an AI model — including when trained with large amounts of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market. This covers large language models, multimodal models, and other foundation models.
| Obligation | All GPAI Models | Systemic Risk Models |
|---|---|---|
| Technical Documentation | Required — including model architecture, training methodology, evaluation results | Required — enhanced detail including safety evaluations |
| EU Copyright Law Compliance | Required — must comply with text and data mining rules, respect opt-outs | Required |
| Training Data Summary | Required — publish sufficiently detailed summary of training content | Required |
| Model Evaluation (Red-Teaming) | Not required | Required — standardized model evaluation including adversarial testing |
| Systemic Risk Assessment | Not required | Required — identify, assess, and mitigate systemic risks |
| Incident Reporting | Not required | Required — report serious incidents to the AI Office |
| Cybersecurity Protections | Not required | Required — ensure adequate level of cybersecurity protection |
| Codes of Practice | May follow to demonstrate compliance | May follow; presumption of conformity if followed |
| Open-Source Exemptions | Exempted from some requirements (tech docs, copyright summary) unless systemic risk | No exemptions — full obligations apply regardless of licensing |
The systemic risk threshold is set at 10^25 FLOPs (floating-point operations) of cumulative compute used for training. Models trained with more than 10^25 FLOPs are presumed to have systemic risk. The AI Office can also designate models below this threshold as having systemic risk based on other criteria, such as the number of registered end users, the model's impact on the internal market, or its reach.
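As a rough illustration of how the threshold is applied, the sketch below uses the common "6ND" training-compute approximation (FLOPs ≈ 6 × parameters × training tokens) — a heuristic from the scaling-law literature, not anything specified in the Act — to test whether a hypothetical model is presumed to carry systemic risk.

```python
SYSTEMIC_RISK_FLOPS = 10**25  # Article 51 presumption threshold

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training compute with the 6ND heuristic."""
    return 6 * params * tokens

# Hypothetical model: 500B parameters trained on 10T tokens.
flops = estimate_training_flops(500e9, 10e12)
print(f"{flops:.2e} FLOPs")                                    # 3.00e+25
print("presumed systemic risk:", flops > SYSTEMIC_RISK_FLOPS)  # True
```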
The AI Office (established within the European Commission) oversees GPAI compliance. Codes of practice are being developed in collaboration with GPAI providers and other stakeholders to provide detailed guidance on how to meet GPAI obligations. The first codes were expected by May 2025.
Free and open-source GPAI models have partial exemptions: they are not required to provide full technical documentation or a detailed training data summary, unless they pose systemic risk. However, all GPAI models — including open-source — must comply with copyright rules and prohibited practice restrictions.
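The two-tier structure plus the open-source carve-out reduces to a small decision table. Here is a sketch, with obligation names paraphrased from the table above; the function signature and set representation are assumptions for illustration.

```python
def gpai_obligations(systemic_risk: bool, open_source: bool) -> set[str]:
    base = {"copyright compliance", "technical documentation", "training data summary"}
    if open_source and not systemic_risk:
        # Partial exemption: docs and summary waived, copyright still applies.
        base -= {"technical documentation", "training data summary"}
    if systemic_risk:
        # Full obligations regardless of licensing, plus systemic-risk duties.
        base |= {"model evaluation / red-teaming", "systemic risk mitigation",
                 "incident reporting to AI Office", "cybersecurity protection"}
    return base

print(sorted(gpai_obligations(systemic_risk=False, open_source=True)))
# ['copyright compliance']
```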
Remember the systemic risk compute threshold: 10^25 FLOPs (10 to the power of 25). This is a frequently tested number. Also: open-source exemptions do NOT apply to systemic risk models. And: GPAI providers are NOT the same as high-risk AI system providers — GPAI has its own separate regulatory regime under the Act.
3.5 — Enforcement and Timeline
The EU AI Act uses a phased enforcement timeline, with different provisions taking effect at different dates. This approach gives organizations time to prepare for compliance while ensuring the most critical prohibitions take effect first. Understanding this timeline is essential for exam preparation.

| Date | Milestone |
|---|---|
| August 1, 2024 | Entry into force |
| February 2, 2025 | Prohibited practices (Article 5) and AI literacy (Article 4) apply |
| August 2, 2025 | GPAI rules, governance structures, and penalty provisions apply |
| August 2, 2026 | High-risk rules for Annex III systems and most remaining provisions apply |
| August 2, 2027 | High-risk rules for Annex I systems (AI as safety components of regulated products) apply |
Penalties and Enforcement Structure
| Violation Type | Maximum Fine (EUR) | Alternative (% Global Turnover) | SME/Startup Cap |
|---|---|---|---|
| Prohibited AI Practices | Up to 35 million EUR | 7% of global annual turnover | Proportionate, lower caps apply |
| High-Risk System Violations | Up to 15 million EUR | 3% of global annual turnover | Proportionate, lower caps apply |
| GPAI Model Violations | Up to 15 million EUR | 3% of global annual turnover | Proportionate, lower caps apply |
| Incorrect/Misleading Information to Authorities | Up to 7.5 million EUR | 1% of global annual turnover | Proportionate, lower caps apply |
National Competent Authorities and Market Surveillance Authorities enforce the Act in each EU member state. Each member state must designate at least one notifying authority and one market surveillance authority. The European AI Office (within the European Commission) handles enforcement for GPAI models at the EU level and coordinates across member states.
AI Literacy (Article 4) is a cross-cutting obligation that took effect alongside the prohibitions in February 2025. It requires all providers and deployers to ensure sufficient AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. The level of literacy must account for technical knowledge, experience, education and training, the context of use, and the persons or groups affected.
AI Literacy (Article 4) is frequently tested because candidates overlook it. Key facts: (1) It applies to ALL providers and deployers, not just high-risk. (2) It took effect in February 2025, making it one of the EARLIEST obligations. (3) It is NOT limited to technical staff — it covers anyone dealing with AI systems. (4) The required literacy level is context-dependent, not a fixed standard.
The penalty amounts are the HIGHER of the fixed EUR amount or the percentage of global annual turnover, whichever is greater. For companies that are part of a larger group, global annual turnover refers to the entire group's worldwide turnover. SMEs and startups receive proportionate caps to avoid disproportionate penalties that could threaten their viability.
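The "higher of" rule is simple arithmetic. The sketch below (tier values taken from the table above; the function name and dictionary keys are hypothetical) computes the applicable maximum for a non-SME undertaking.

```python
# (fixed cap in EUR, share of global annual turnover), per the table above
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "gpai_violation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed_cap, turnover_share = PENALTY_TIERS[violation]
    # The higher of the two amounts applies (group-wide turnover for groups).
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A group with EUR 2 billion worldwide turnover committing a prohibited practice:
print(f"{max_fine('prohibited_practice', 2e9):,.0f} EUR")  # 140,000,000 EUR
```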
3.6 — Practice Questions

Q: What are the four risk tiers of the EU AI Act? Give an example of each.
A: Unacceptable Risk (prohibited — e.g., social scoring), High Risk (heavily regulated — e.g., AI in employment recruitment), Limited Risk (transparency obligations — e.g., chatbots must disclose AI use), and Minimal Risk (no specific AI Act obligations — e.g., spam filters, AI in video games).
Q: List the eight prohibited AI practices under Article 5. When did they become enforceable?
A: 1) Subliminal manipulation, 2) Exploitation of vulnerabilities, 3) Social scoring, 4) Individual-level predictive policing, 5) Untargeted facial scraping, 6) Emotion recognition in workplace/education, 7) Biometric categorization for sensitive attributes, 8) Real-time remote biometric ID in public spaces (with narrow exceptions). These took effect February 2, 2025 — the earliest enforceable provisions.
Q: What additional obligations apply to GPAI models with systemic risk, and what is the compute threshold?
A: Systemic risk models must: conduct standardized model evaluation including adversarial testing (red-teaming), assess and mitigate systemic risks, report serious incidents to the AI Office, and ensure adequate cybersecurity protections. The threshold is >10^25 FLOPs of cumulative training compute, or designation by the AI Office based on other criteria.
Q: Outline the phased enforcement timeline of the EU AI Act.
A: August 1, 2024: Entry into force. February 2, 2025: Prohibited practices + AI literacy. August 2, 2025: GPAI rules. August 2, 2026: High-risk Annex III (standalone AI in biometrics, employment, etc.). August 2, 2027: High-risk Annex I (AI as safety components of products).
Q: How do provider and deployer obligations differ for high-risk AI systems, and when does a deployer become a provider?
A: Providers develop or place AI systems on the market and bear design-time obligations (technical documentation, conformity assessment, CE marking, post-market monitoring). Deployers use AI systems under their authority and bear use-time obligations (human oversight, fundamental rights impact assessment, log retention, informing individuals). If a deployer substantially modifies a high-risk system, they become the provider and assume all provider obligations.
Q: What does Article 4 (AI literacy) require, and to whom does it apply?
A: Article 4 requires ALL providers and deployers to ensure sufficient AI literacy among staff and any persons dealing with the operation and use of AI systems on their behalf. It is not limited to technical staff. The required level is context-dependent, considering technical knowledge, experience, education, context of use, and affected persons. It took effect February 2, 2025 — one of the earliest obligations.
Q: Summarize the penalty structure of the EU AI Act.
A: Three tiers: (1) Prohibited practices: up to 35M EUR or 7% global annual turnover; (2) High-risk/GPAI violations: up to 15M EUR or 3%; (3) Incorrect information to authorities: up to 7.5M EUR or 1%. The HIGHER of the fixed amount or percentage applies. For group companies, 'global annual turnover' refers to the entire group's worldwide turnover. SMEs receive proportionate lower caps.
Q: What exemptions do open-source GPAI models receive, and when do they not apply?
A: Open-source GPAI models receive partial exemptions: they are not required to provide full technical documentation or a detailed training data summary. However, they MUST still comply with EU copyright law and prohibited practice rules. Critically, if an open-source model poses systemic risk (>10^25 FLOPs or AI Office designation), ALL exemptions are removed and full systemic risk obligations apply.