EU AI Act Glossary — Key Terms and Definitions

25 essential terms from Regulation (EU) 2024/1689 explained in plain language, with references to the specific articles where each term is defined or used.

Published: 16 April 2026 · Last updated: April 2026

The EU AI Act introduces a dense regulatory vocabulary. Many terms — "provider," "deployer," "high-risk AI system," "substantial modification" — carry precise legal meanings that differ from their everyday use. Misunderstanding a single definition can lead to incorrect risk classification, missed obligations or flawed conformity assessments. This glossary defines the 25 most important terms, cites the relevant articles, and explains each in the context of practical compliance work.

A

AI System

Article 3(1)

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. This definition is aligned with the OECD definition and is deliberately broad — it captures traditional ML models, deep learning systems, and hybrid approaches.

Algorithmic Impact Assessment

Related: Articles 9, 27

While the AI Act does not use this exact term, the concept is embedded in the risk management system (Art. 9) and the fundamental rights impact assessment required of certain deployers of high-risk systems (Art. 27). An algorithmic impact assessment evaluates the potential effects of an AI system on individuals and groups, covering discrimination, privacy, autonomy and due process. Even where the Act does not mandate a formal assessment, it is widely treated as a prerequisite for responsible deployment of any high-risk system.

Annex III

Article 6(2)

The annex listing the eight areas in which AI systems are classified as high-risk: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes. A system that falls into any Annex III category must comply with Articles 9 through 15 unless it qualifies for the Article 6(3) exemption.

B

Biometric Identification

Article 3(35)

The automated recognition of physical, physiological or behavioural human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data stored in a database. Remote biometric identification in publicly accessible spaces is one of the most strictly regulated use cases under the AI Act: conformity assessment may require a notified body (Art. 43(1)), and real-time use by law enforcement is prohibited under Article 5 except in narrow cases subject to prior authorisation by a judicial or independent administrative authority.

C

CE Marking (AI)

Article 48

The marking by which a provider indicates that a high-risk AI system conforms to the requirements of the AI Act. The CE marking must be affixed visibly, legibly and indelibly to the AI system or, where that is not possible, to its packaging or accompanying documentation. For AI systems embedded in products already subject to CE marking under Annex I harmonisation legislation, the AI Act CE marking is integrated into the existing marking process.

Conformity Assessment

Article 43

The process of verifying whether a high-risk AI system meets all applicable requirements before it can be placed on the EU market. Most Annex III high-risk systems can use an internal conformity assessment procedure (Art. 43(2)). Biometric systems under Annex III point 1 require third-party assessment by a notified body where harmonised standards are not, or not fully, applied (Art. 43(1)). The assessment must be repeated when a substantial modification is made to the system.

D

Data Governance

Article 10

The set of practices governing training, validation and testing data for high-risk AI systems. Article 10 requires that data be relevant, sufficiently representative, and as free of errors as possible. Providers must examine datasets for biases, identify data gaps, and apply appropriate statistical measures. Data governance is one of the most operationally demanding obligations because it requires retroactive documentation of datasets that were often assembled without regulatory requirements in mind.

Deep Fake

Article 3(60), Article 50(4)

AI-generated or manipulated image, audio or video content that appreciably resembles existing persons, objects, places or events and would falsely appear to a person to be authentic. Under the AI Act, deployers of systems that generate deep fakes must disclose that the content has been artificially generated or manipulated. This transparency obligation applies regardless of whether the AI system is classified as high-risk.

Deployer

Article 3(4)

A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Deployers of high-risk systems have their own set of obligations under Article 26, including using the system in accordance with the instructions for use, ensuring human oversight, and monitoring the system's operation. Public-sector deployers must also conduct a fundamental rights impact assessment (Art. 27).

F

Fundamental Rights Impact Assessment (FRIA)

Article 27

An assessment that certain deployers of high-risk AI systems — bodies governed by public law, private entities providing public services, and deployers of credit-scoring and life or health insurance risk-assessment systems — must carry out before putting the system into use. The FRIA must identify the specific risks to fundamental rights posed by the AI system in its particular context of use, the measures taken to mitigate those risks, and the governance and human oversight mechanisms in place. The results must be notified to the relevant market surveillance authority.

G

General-Purpose AI (GPAI)

Article 3(63), Articles 51-56

An AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market. GPAI models have their own set of obligations under Articles 51-56, applicable since 2 August 2025, including technical documentation, transparency to downstream providers, and copyright policy compliance. GPAI models with systemic risk face additional requirements including adversarial testing and incident reporting.

H

High-Risk AI System

Article 6, Annex III

An AI system that either falls into one of the eight areas listed in Annex III or is used as a safety component in a product covered by Annex I Union harmonisation legislation and subject to third-party conformity assessment. High-risk systems must comply with the full set of requirements in Articles 9 through 15 — risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity — and must pass a conformity assessment (Art. 43) before market placement. This is the category that triggers the most demanding compliance requirements under the AI Act.
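
The two routes to high-risk status reduce to a simple decision rule. The sketch below is illustrative only — the area labels are shorthand rather than the Act's wording, and a real classification requires legal analysis of Annex III and the Article 6(3) exemption:

```python
# Illustrative sketch of the Article 6 classification logic, not legal advice.
# Labels are shorthand for the eight Annex III areas.

ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def is_high_risk(annex_iii_area: str | None,
                 annex_i_safety_component: bool,
                 art_6_3_exemption_applies: bool = False) -> bool:
    """Mirror the two routes to high-risk status under Article 6."""
    # Route 1 (Art. 6(1)): safety component of a product under Annex I
    # harmonisation law that requires third-party conformity assessment.
    if annex_i_safety_component:
        return True
    # Route 2 (Art. 6(2)): listed Annex III area, unless the narrow
    # Art. 6(3) exemption (e.g. purely procedural tasks) applies.
    if annex_iii_area in ANNEX_III_AREAS:
        return not art_6_3_exemption_applies
    return False

print(is_high_risk("employment_worker_management", False))  # True
```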

Human Oversight

Article 14

Measures that enable natural persons to effectively oversee the functioning of a high-risk AI system while it is in use. Article 14 requires that oversight measures allow humans to fully understand the system's capabilities and limitations, correctly interpret its output, decide not to use it in a particular situation, override or reverse its output, and intervene in or interrupt its operation. The design of human oversight must be proportionate to the risks, the level of autonomy, and the context of use.

M

Market Surveillance

Articles 74-78

The activities carried out by national market surveillance authorities to ensure that AI systems on the EU market comply with the AI Act. Market surveillance authorities have the power to request documentation, access source code under justified conditions, conduct audits, order corrective actions, and impose fines. Each Member State must designate at least one market surveillance authority and one notifying authority. The European AI Office coordinates cross-border enforcement.

N

Notified Body

Articles 28-39

An organisation designated by a Member State to carry out third-party conformity assessments for high-risk AI systems that require them. Under the AI Act, notified bodies are primarily relevant for the biometric systems listed in Annex III point 1 (Art. 43(1)). Notified bodies must be independent, competent, and accredited. The designation process is managed by notifying authorities in each Member State. Capacity is limited — organisations requiring a notified body should engage early.

P

Post-Market Monitoring

Article 72

The system that providers of high-risk AI systems must establish and document to collect, record and analyse data on the performance of their systems throughout their lifetime. Post-market monitoring must be proportionate to the nature and risks of the system. It feeds into the risk management system and must enable the provider to detect any need for corrective or preventive action. Serious incidents must be reported to market surveillance authorities within 15 days under Article 73.

Prohibited AI Practices

Article 5

AI practices that have been banned outright in the EU since 2 February 2025. These include social scoring, exploitation of vulnerabilities of specific groups, real-time remote biometric identification in publicly accessible spaces by law enforcement (outside the narrow Article 5(2) exceptions), emotion recognition in workplaces and educational institutions, untargeted scraping of facial images from the internet, and subliminal manipulation that causes harm. Violations carry the highest fines: up to 35 million EUR or 7% of global annual turnover, whichever is higher.
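
The fine ceiling scales with company size. A one-line sketch of the Article 99(3) arithmetic — the turnover figure below is a hypothetical input:

```python
def max_art5_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling on Article 5 fines per Article 99(3): the higher of
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140 million > EUR 35 million.
print(max_art5_fine_eur(2_000_000_000))  # 140000000.0
```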

Provider

Article 3(3)

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or GPAI model developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge. The provider bears the primary compliance burden under the AI Act, including risk management, technical documentation, conformity assessment and post-market monitoring. A deployer can become a provider if it substantially modifies a high-risk system or puts it on the market under its own name.

R

Real-Time Remote Biometric Identification

Article 3(42), Article 5(1)(h), Article 5(2)

The use of an AI system for biometric identification where the capturing of biometric data, the comparison and the identification occur without a significant delay, in real time or near-real time. Real-time remote biometric identification by law enforcement in publicly accessible spaces is prohibited under Article 5 except in three narrowly defined situations: targeted search for specific missing persons or victims, prevention of a specific and imminent terrorist threat, and localisation or identification of a suspect of a serious criminal offence. Each exception requires prior authorisation by a judicial authority or an independent administrative authority whose decision is binding.

Risk Management System

Article 9

A continuous, iterative process that must be established, implemented, documented and maintained throughout the entire lifecycle of a high-risk AI system. The risk management system must identify and analyse known and reasonably foreseeable risks, estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse, and adopt appropriate risk management measures. It must be updated whenever a significant change occurs or new information about risks becomes available.

S

AI Regulatory Sandboxes

Articles 57-63

Controlled environments established by national competent authorities that allow providers and prospective providers to develop, train, validate and test innovative AI systems under regulatory supervision before placing them on the market. Sandboxes provide legal certainty during the development phase and allow regulators to build expertise. Each Member State must establish at least one sandbox by 2 August 2026. Participation is voluntary and does not exempt participants from compliance with the AI Act upon market placement.

Substantial Modification

Article 3(23)

A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment and which affects the compliance of the system with the AI Act or modifies the intended purpose for which the system has been assessed. A substantial modification triggers a new conformity assessment. This is particularly relevant for AI systems that are retrained on new data or adapted to new use cases — each such change must be evaluated against the substantial modification threshold.
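
The Article 3(23) definition reduces to a small piece of boolean logic, which can serve as a first-pass screen in a change-management process. A sketch, not legal advice:

```python
def is_substantial_modification(foreseen_in_initial_assessment: bool,
                                affects_compliance: bool,
                                changes_intended_purpose: bool) -> bool:
    """First-pass reading of Article 3(23): a change is substantial if it
    was not foreseen or planned in the initial conformity assessment and
    it either affects compliance with the Act or modifies the intended
    purpose for which the system was assessed."""
    if foreseen_in_initial_assessment:
        return False
    return affects_compliance or changes_intended_purpose

# Retraining on new data that pushes accuracy below documented metrics:
print(is_substantial_modification(False, True, False))  # True -> reassess
```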

Systemic Risk

Article 3(65), Article 51

A risk that is specific to the high-impact capabilities of general-purpose AI models, that has a significant effect on the Union market due to its reach, or has actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, and that can be propagated at scale across the value chain. A GPAI model is presumed to have high-impact capabilities — and therefore to pose systemic risk — when the cumulative compute used for its training exceeds 10^25 floating-point operations (Art. 51(2)). Such models face additional obligations including adversarial testing, incident reporting, and model evaluation.
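
For a back-of-the-envelope check against the 10^25 FLOP presumption, the widely used 6·N·D heuristic (roughly 6 FLOPs per parameter per training token for a dense transformer) gives an estimate. The heuristic is a community rule of thumb, not part of the Act, and the model size below is hypothetical:

```python
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D rule of thumb.
    The Act's threshold refers to cumulative training compute in FLOPs
    (Art. 51(2)); this heuristic only approximates it."""
    return 6.0 * n_params * n_tokens

THRESHOLD_FLOPS = 1e25  # systemic-risk presumption threshold, Art. 51(2)

# A 70B-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, above threshold: {flops > THRESHOLD_FLOPS}")
# ~6.30e+24 FLOPs -> below the presumption threshold on this estimate
```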

T

Technical Documentation

Article 11, Annex IV

The comprehensive dossier that providers of high-risk AI systems must draw up and maintain. Annex IV specifies the required contents: general description and intended purpose, detailed description of elements and development process, monitoring and control information, risk management system description, data governance measures, dataset characteristics, accuracy and robustness metrics, human oversight measures, and logging capabilities. The documentation must be kept up to date and made available to market surveillance authorities upon request.

Transparency Obligations

Article 13, Article 50

Two distinct sets of transparency requirements exist in the AI Act. Article 13 requires providers of high-risk systems to supply instructions for use that describe the system's intended purpose, limitations, accuracy metrics, and human oversight measures. Article 50 imposes lighter transparency obligations on certain AI systems regardless of risk level: systems that interact with natural persons must disclose that they are AI, emotion recognition systems must inform the persons exposed to them, and deployers of deep-fake generators must disclose that the content is artificially generated or manipulated.