
EU AI Act Consulting — What High-Risk AI Providers Need Before 2 August 2026

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) sets the first horizontal rules for AI worldwide. From 2 August 2026, providers and deployers of high-risk systems must meet eight concrete obligations or face fines of up to 15 million euros or 3% of worldwide annual turnover.

Published: 15 April 2026 · Last updated: April 2026

The EU Artificial Intelligence Act entered into force on 1 August 2024 and is being phased in over three years. Prohibited practices have been banned since 2 February 2025. Obligations for general-purpose AI models took effect on 2 August 2025. The next — and by far the broadest — milestone arrives on 2 August 2026, when the requirements for high-risk AI systems become binding for every provider and deployer placing such systems on the Union market.

For most organisations this is not a paperwork exercise. Building a compliant risk management system, producing the technical documentation required by Annex IV, setting up post-market monitoring, and running a conformity assessment typically takes 9 to 12 months of focused work. Companies that have not yet started preparing now have less than four months of usable runway before the deadline.

Key facts about the EU AI Act

Legal basis: Regulation (EU) 2024/1689, entered into force 1 August 2024

High-risk obligations apply from: 2 August 2026

General-purpose AI rules apply from: 2 August 2025

Maximum fine: 35 million euros or 7% of worldwide annual turnover

Fine for high-risk non-compliance: up to 15 million euros or 3% of turnover

Eight high-risk areas listed in Annex III (biometrics, critical infrastructure, education, employment, access to essential services including credit scoring, law enforcement, migration, administration of justice)

What qualifies as a high-risk AI system

Annex III of the AI Act lists eight areas in which an AI system is presumed high-risk. These include biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice.

A second pathway to high-risk status runs through Annex I: AI components used as safety components in products already regulated by existing EU harmonisation law, such as machinery, toys, medical devices, lifts or radio equipment. Here the AI Act obligations layer on top of the existing CE marking regime.

Not every AI system used in these areas automatically qualifies. Article 6(3) introduces a filter: systems that perform narrow procedural tasks, improve the result of a human activity, detect decision patterns without replacing human assessment, or prepare an assessment relevant for the listed use cases can be exempted — provided the provider documents the reasoning and registers the exemption.
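The classification logic described above can be sketched in code. This is an illustrative model only, not a substitute for a documented legal assessment; the area names, exemption labels and function signature are assumptions made for the example. Note that under Article 6(3), the exemption never applies where the system performs profiling of natural persons.

```python
# Illustrative sketch of the Annex III / Article 6(3) classification filter.
# All identifiers here are assumptions for the example, not legal terms of art.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

# Paraphrased Article 6(3) carve-out grounds
ART_6_3_EXEMPTIONS = {
    "narrow_procedural_task",
    "improves_result_of_human_activity",
    "detects_patterns_without_replacing_human_assessment",
    "preparatory_assessment",
}

def classify(area: str, exemption_grounds: set[str], profiles_persons: bool) -> str:
    """Return a provisional risk label for a proposed AI use case."""
    if area not in ANNEX_III_AREAS:
        return "out_of_annex_iii_scope"
    # Systems that profile natural persons are always high-risk
    # (Article 6(3), last subparagraph) -- no exemption possible.
    if profiles_persons:
        return "high_risk"
    if exemption_grounds & ART_6_3_EXEMPTIONS:
        # Provider must still document the reasoning and register
        # the exemption (Articles 6(4) and 49(2)).
        return "exempt_document_and_register"
    return "high_risk"
```

A CV-screening tool that merely deduplicates applications might plead a narrow procedural task, but the same tool ranking candidates would profile natural persons and stay high-risk.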

The eight obligations for high-risk providers

Risk management system (Article 9): a continuous, iterative process to identify, estimate, evaluate and mitigate the risks an AI system poses to health, safety and fundamental rights.

Data and data governance (Article 10): training, validation and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Data governance practices must address bias, data gaps and the appropriateness of the data for the intended purpose.

Technical documentation (Article 11, Annex IV): a complete dossier containing a general description of the system, its intended purpose, the hardware it runs on, the algorithms used, the training data, the validation and testing metrics, the risk management system and the instructions for use.

Record-keeping (Article 12): high-risk systems must automatically log events to ensure a level of traceability appropriate to the intended purpose, and those logs must be kept for a minimum of six months.
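In practice, the Article 12 logging obligation usually means append-only structured records with a timestamp, a system identifier and the event details. A minimal sketch, assuming JSON Lines as the storage format (the field names and file naming are our own choices, not prescribed by the Act):

```python
import datetime
import json
import uuid

def log_event(system_id: str, event_type: str, payload: dict) -> str:
    """Append one traceability record as a JSON line and return it.

    Minimal sketch of an Article 12-style audit log; field names are
    illustrative assumptions, not regulatory requirements.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "human_override", "model_update"
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "payload": payload,
    }
    line = json.dumps(record, sort_keys=True)
    # Append-only JSONL file per system; retain for at least six months.
    with open(f"{system_id}_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line
```

A real deployment would add tamper-evidence (hash chaining or a write-once store) and a retention policy aligned with the six-month minimum.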

Transparency and information to deployers (Article 13): instructions for use must describe the system's intended purpose, the level of accuracy, robustness and cybersecurity it was validated against, its known limitations, and the measures the deployer must take to ensure human oversight.

Human oversight (Article 14): oversight measures must enable a human to understand the system's capacities and limitations, interpret its output correctly, decide not to use it, override it, or intervene in its operation.

Accuracy, robustness and cybersecurity (Article 15): systems must perform consistently throughout their lifecycle and be resilient against errors, faults and attempts to alter their use or performance through adversarial inputs.

Conformity assessment and CE marking (Articles 43, 48): before being placed on the market, the system must pass a conformity assessment. For most Annex III systems this is an internal procedure; biometric identification systems require a notified body.

What good EU AI Act consulting covers

Consulting engagements on the AI Act typically begin with an inventory and classification sprint. Every AI component already in use or on the roadmap is mapped against Article 6 and Annex III to determine whether it is prohibited, high-risk, subject to transparency obligations only, or out of scope.

The next phase is a gap assessment against the eight obligations. This usually surfaces missing documentation, undocumented training datasets, unclear accountability for model updates, and oversight procedures that exist on paper but not in practice. A realistic remediation plan assigns each gap to a named owner with a deadline tied to the 2 August 2026 milestone.

The third phase is implementation: drafting the Annex IV technical documentation, setting up the logging infrastructure, writing instructions for use, and preparing for the conformity assessment. Where the provider relies on a general-purpose AI model from a third party, the consultant also verifies that the upstream provider delivers the information required by Article 53.

Implementation timeline

2 February 2025 — Ban on prohibited AI practices (Article 5)

2 August 2025 — GPAI model obligations apply

2 August 2026 — High-risk obligations apply in full

2 August 2027 — Extended transition for products covered by Annex I (e.g. medical devices, machinery)

Now: inventory AI use cases, classify risk, start technical documentation

How to choose an AI Act consultant

A credible EU AI Act consultant combines three things: fluency in the regulation's text (including the delegated acts and the harmonised standards being developed by CEN-CENELEC JTC 21), hands-on experience with machine learning engineering, and familiarity with existing product-safety conformity assessment procedures. Pure legal advisors without engineering depth tend to produce paperwork that fails a technical audit; pure engineers without regulatory depth tend to miss documentation requirements that become fatal at the conformity assessment stage.

The market is young and uneven. When evaluating providers, ask for a sample Annex IV technical documentation redacted from a real engagement, ask how they handle GPAI upstream dependencies, and ask which harmonised standards they are tracking. Consultants who cannot answer these three questions concretely are not yet ready to carry a high-risk project to the 2 August 2026 deadline.

Frequently asked questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive, horizontal AI regulation in the world. It entered into force on 1 August 2024 and classifies AI systems into four risk tiers: unacceptable risk (banned), high risk (strictly regulated), limited risk (transparency obligations) and minimal risk (unregulated). It applies to providers placing AI systems on the EU market, to deployers using them in the EU, and, in certain cases, to providers outside the EU whose output is used in the Union.

Which deadlines apply in 2026?

The main deadline in 2026 is 2 August, when the obligations for high-risk AI systems become fully applicable. From this date, providers of high-risk systems must have a risk management system, technical documentation, data governance, logging, transparency measures, human oversight, accuracy and cybersecurity controls in place, and must have passed a conformity assessment. A further transition until 2 August 2027 applies for AI systems that are safety components of products regulated under Annex I.

What is a high-risk AI system?

A high-risk AI system is one that falls into an area listed in Annex III of the AI Act (for example biometric identification, critical infrastructure, education, employment decisions, access to essential services, law enforcement, migration and justice) or is used as a safety component in a product covered by Annex I Union harmonisation law. Article 6(3) allows exceptions for narrow procedural or preparatory systems, but providers must document the justification and register the exemption in the EU database.

What fines apply for non-compliance?

The AI Act sets three tiers of administrative fines. Violating the prohibitions in Article 5 can trigger fines of up to 35 million euros or 7% of the provider's worldwide annual turnover, whichever is higher. Non-compliance with high-risk obligations or transparency obligations can reach 15 million euros or 3% of turnover. Providing incorrect, incomplete or misleading information to authorities can reach 7.5 million euros or 1% of turnover. For SMEs and start-ups, the lower of the two values applies.

What does EU AI Act consulting cover?

A typical consulting scope covers four phases: first, an inventory and risk classification of all AI systems in use; second, a gap assessment against the eight obligations for high-risk systems; third, remediation — drafting technical documentation under Annex IV, setting up logging, designing human oversight, and verifying data governance; and fourth, preparation for the conformity assessment, including EU declaration of conformity, CE marking where applicable, and registration in the EU database.

How long does AI Act compliance take?

For a provider with a single high-risk system and mature MLOps practices already in place, a realistic timeline is 6 to 9 months. For organisations with multiple systems, fragmented data governance or no prior experience with product-safety conformity assessments, the realistic range is 9 to 15 months. Given the 2 August 2026 deadline, organisations that have not started yet are already in the compressed zone and should prioritise classification and gap assessment before any remediation work.