The 2 August 2026 deadline is the single most consequential date in the AI Act's phased rollout. On that day, all obligations for high-risk AI systems become enforceable, including risk management, data governance, technical documentation, human oversight and conformity assessment. Providers that have not completed these requirements face administrative fines of up to 15 million euros or 3% of global annual turnover, whichever is higher.
This checklist organises the compliance work into five sequential phases. Each phase builds on the output of the previous one, and each maps to specific articles of the regulation. For organisations starting now, the realistic minimum timeline is six months — which means every week of delay compresses the available runway further.
Applicable regulation: Regulation (EU) 2024/1689 (EU AI Act)
Deadline for high-risk obligations: 2 August 2026
Maximum fine (prohibited practices): 35 million EUR or 7% of global turnover
Fine for high-risk non-compliance: up to 15 million EUR or 3% of global annual turnover (whichever is higher)
Annex III high-risk categories: 8 areas (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)
Estimated organisations affected: approximately 15,000 across the EU
Phase 1: AI System Inventory and Risk Classification
Before any compliance work can begin, organisations must know exactly which AI systems they operate, procure or plan to deploy. This phase maps every system against Article 6 and Annex III.
- Compile a complete register of all AI systems in use, under development, or procured from third parties (a minimal register sketch in code follows this list)
- Classify each system against the eight Annex III high-risk areas (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)
- Identify AI components embedded as safety components in products covered by Annex I Union harmonisation legislation
- Evaluate whether any systems fall under the Article 6(3) exemptions for narrow procedural or preparatory tasks — and document the justification
- Register all high-risk systems in the EU database as required by Article 49
- Flag any systems that may constitute prohibited practices under Article 5 (social scoring, real-time remote biometric identification without exemption, emotion recognition in workplaces or education)
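To make the register item concrete, here is a minimal sketch in Python of what an internal inventory record could look like. The field names, the AnnexIIICategory labels and the high_risk heuristic are illustrative assumptions, not terms prescribed by the regulation; actual classification decisions belong with legal counsel.

```python
from dataclasses import dataclass, field
from enum import Enum


class AnnexIIICategory(Enum):
    # The eight Annex III high-risk areas; labels here are illustrative.
    BIOMETRICS = "biometrics"
    CRITICAL_INFRASTRUCTURE = "critical_infrastructure"
    EDUCATION = "education"
    EMPLOYMENT = "employment"
    ESSENTIAL_SERVICES = "essential_services"
    LAW_ENFORCEMENT = "law_enforcement"
    MIGRATION = "migration"
    JUSTICE = "justice"


@dataclass
class AISystemRecord:
    """One row in the internal AI system register (fields are illustrative)."""
    name: str
    intended_purpose: str
    vendor: str | None = None                # None for in-house systems
    annex_iii_categories: list[AnnexIIICategory] = field(default_factory=list)
    art_6_3_exemption: bool = False          # narrow procedural/preparatory task?
    exemption_justification: str = ""        # must be documented if exemption claimed
    possibly_prohibited: bool = False        # flag for Article 5 review

    @property
    def high_risk(self) -> bool:
        # Heuristic only: in an Annex III area and no Article 6(3) exemption claimed.
        return bool(self.annex_iii_categories) and not self.art_6_3_exemption


# Example entry: a hypothetical CV-screening tool, which falls under employment.
record = AISystemRecord(
    name="cv-screening-tool",
    intended_purpose="rank incoming job applications",
    vendor="ExampleVendor Ltd",
    annex_iii_categories=[AnnexIIICategory.EMPLOYMENT],
)
assert record.high_risk
```

Keeping the register as structured data rather than a spreadsheet makes the Article 49 database registration and later audits easier to script, though any equivalent tooling works.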
Phase 2: Conformity Assessment Path
The conformity assessment determines whether a high-risk system may legally be placed on the EU market. Article 43 defines two routes depending on the system type.
- Determine whether each high-risk system qualifies for internal conformity assessment (Article 43(2)) or requires a notified body (Article 43(1)); a simplified decision sketch follows this list
- For remote biometric identification systems that need third-party assessment: engage a notified body early, as assessment capacity is limited
- For Annex I safety-component systems: integrate AI Act conformity into the existing product conformity procedure
- Draft the EU declaration of conformity per Article 47
- Prepare CE marking documentation where applicable (Article 48)
- Assign a named compliance owner within the organisation for each high-risk system
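As a rough illustration of the routing decision described above, the function below encodes it as a sketch. It is deliberately simplified, and the parameter names are assumptions; in particular it assumes that notified-body involvement for Annex III point 1 biometric systems hinges on whether harmonised standards were applied in full. The regulation text and legal advice take precedence.

```python
def assessment_route(in_annex_iii_point_1: bool,
                     harmonised_standards_fully_applied: bool,
                     annex_i_safety_component: bool) -> str:
    """Sketch of the Article 43 routing decision; not a substitute for legal review."""
    if annex_i_safety_component:
        # Annex I products: fold AI Act checks into the existing product procedure.
        return "integrate into existing Annex I product conformity procedure"
    if in_annex_iii_point_1 and not harmonised_standards_fully_applied:
        # Biometric systems without fully applied harmonised standards.
        return "notified body procedure (Annex VII, Article 43(1))"
    return "internal control (Annex VI, Article 43(2))"
```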
Phase 3: Technical Documentation
Articles 10 and 11, together with Annex IV, specify the data governance and documentation requirements. This is typically the most labour-intensive phase.
- Draft the Annex IV technical dossier: general system description, intended purpose, development process, algorithms used, hardware and software specifications
- Document data governance practices per Article 10: training data selection, relevance, representativeness, error correction, bias examination
- Record the characteristics of training, validation and testing datasets including statistical properties, known gaps, and potential biases
- Document accuracy, robustness and cybersecurity metrics with validation methodology (Article 15)
- Write instructions for use per Article 13: intended purpose, limitations, human oversight measures, accuracy levels, foreseeable misuse scenarios
- Ensure logging capabilities are implemented and documented per Article 12; the resulting logs must be retained for at least six months under Article 19 (a minimal logging sketch follows this list)
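Article 12 requires automatic recording of events over the system's lifetime but does not prescribe a format. The following is a minimal sketch, assuming append-only JSON-lines files and a six-month retention floor per Article 19; the schema, file layout and purge strategy are all illustrative.

```python
import json
import time
from pathlib import Path

RETENTION_SECONDS = 183 * 24 * 3600  # roughly six months, the Article 19 minimum


def log_event(log_dir: Path, system_id: str, event: str, **details) -> None:
    """Append one JSON line per system event; the record schema is illustrative."""
    record = {"ts": time.time(), "system_id": system_id,
              "event": event, "details": details}
    log_dir.mkdir(parents=True, exist_ok=True)
    with open(log_dir / f"{system_id}.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


def purge_expired(log_dir: Path) -> None:
    """Delete files whose last write is past the retention floor.

    Six months is the minimum, not a target: a real deployment would likely
    archive rather than delete, and purge per record rather than per file.
    """
    now = time.time()
    for path in log_dir.glob("*.jsonl"):
        if now - path.stat().st_mtime > RETENTION_SECONDS:
            path.unlink()
```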
Phase 4: Quality Management System
Article 17 requires providers to establish a quality management system (QMS) that ensures ongoing compliance. This is not a one-off deliverable — it must be maintained throughout the system's lifecycle.
- Establish a documented quality management system covering all AI Act obligations (Article 17)
- Define procedures for design, development and testing under controlled conditions
- Implement version control and change management for models, training data and system components (see the release-record sketch after this list)
- Set up procedures for third-party component management — especially upstream GPAI model dependencies (Article 53)
- Create a complaints handling and incident reporting process
- Assign roles and responsibilities for QMS maintenance, including periodic internal audits
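For the change-management item above, here is a minimal sketch of a release record that ties a model version to its training data and approval trail, assuming artifacts are hashed for traceability. The structure and the substantial_modification flag are illustrative; whether a given change counts as a substantial modification is a legal assessment, not a boolean a script can decide.

```python
import hashlib
from dataclasses import dataclass


def file_sha256(path: str) -> str:
    """Hash a serialized artifact so a release is traceable to exact bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass(frozen=True)
class ModelRelease:
    """Immutable change-management record tying a model to its inputs."""
    version: str
    model_sha256: str               # hash of the serialized model artifact
    training_data_ref: str          # e.g. a dataset snapshot tag
    approved_by: str
    substantial_modification: bool  # if True, re-run conformity assessment (Article 43(4))
```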
Phase 5: Post-Market Monitoring and Incident Reporting
Compliance does not end at market placement. Article 72 requires ongoing monitoring, and Article 73 mandates incident reporting to market surveillance authorities.
- Establish a post-market monitoring system proportionate to the nature and risks of the AI system (Article 72)
- Define metrics and thresholds for detecting performance degradation, bias drift and safety incidents (a minimal drift-check sketch follows this list)
- Implement an incident reporting process: serious incidents must be reported to the relevant market surveillance authority no later than 15 days after the provider becomes aware of them (Article 73); shorter deadlines apply for deaths and widespread incidents
- Set up human oversight mechanisms per Article 14: the natural persons assigned to oversight must be able to understand the system's capacities and limitations, correctly interpret its output, and intervene in or interrupt its operation
- Plan for substantial modifications — any significant change triggers a new conformity assessment (Article 43(4))
- Schedule periodic reviews of the risk management system to incorporate new hazards, user feedback and operational data (Article 9(2))
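A minimal sketch of the threshold checks mentioned in the list above, assuming the provider tracks a rolling accuracy window and a worst-case subgroup gap. The metric choices and threshold values are placeholders: Article 72 requires only that monitoring be proportionate to the system's nature and risks, nothing more specific.

```python
def drift_alerts(window_accuracy: float,
                 baseline_accuracy: float,
                 worst_subgroup_gap: float,
                 max_accuracy_drop: float = 0.05,
                 max_subgroup_gap: float = 0.10) -> list[str]:
    """Return alert labels when monitored metrics cross internal thresholds."""
    alerts = []
    if baseline_accuracy - window_accuracy > max_accuracy_drop:
        alerts.append("performance_degradation")  # feeds risk review (Article 9(2))
    if worst_subgroup_gap > max_subgroup_gap:
        alerts.append("bias_drift")               # may trigger corrective action
    return alerts
```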
Key dates in the rollout
2 February 2025 — Prohibited AI practices banned (Article 5)
2 August 2025 — General-purpose AI (GPAI) model obligations apply
2 August 2026 — High-risk AI system obligations apply in full
2 August 2027 — Extended transition for AI in products regulated under Annex I (medical devices, machinery, etc.)
Frequently asked questions
Do I need a notified body for my conformity assessment?
Most high-risk AI systems listed in Annex III can use the internal conformity assessment procedure under Article 43(2). The exception is the biometric systems listed in Annex III point 1, including remote biometric identification: under Article 43(1), these require third-party assessment by a notified body unless the provider has applied harmonised standards (or, where relevant, common specifications) in full, in which case internal control remains available. If your AI system is a safety component of a product covered by Annex I (e.g. medical devices, machinery), the AI Act conformity assessment is integrated into the existing product conformity procedure.
What documentation does Annex IV require?
Annex IV requires a comprehensive technical dossier covering: a general description of the AI system and its intended purpose; detailed description of the elements and development process; information about monitoring, functioning and control; a description of the risk management system; a description of any change made throughout the lifecycle; the data governance measures and the characteristics of training, validation and testing datasets; the metrics used to measure accuracy, robustness and cybersecurity; the human oversight measures; and a detailed description of the system's logging capabilities.
What happens if I miss the 2 August 2026 deadline?
Placing a non-compliant high-risk AI system on the EU market after 2 August 2026 exposes providers to administrative fines of up to 15 million euros or 3% of worldwide annual turnover, whichever is higher, under Article 99(4). Market surveillance authorities can also order the withdrawal or recall of the system under Article 79. For AI systems already on the market before the deadline, Article 111(2) provides that existing systems must be brought into compliance only when they undergo a significant change in design. However, new placements, new deployments and new versions must be fully compliant from day one.