EU AI Act High-Risk AI Obligations: A Jurist’s Guide

Dr. Raphael Nagel (LL.M.)
From the work · MASCHINENRECHT

EU AI Act High-Risk AI Obligations: What Providers and Deployers Must Deliver by August 2026

Under the EU AI Act, high-risk AI obligations impose seven cumulative duties on providers and deployers: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy together with robustness and cybersecurity. Breaches of the high-risk obligations themselves trigger fines up to 15 million euros or 3 percent of worldwide turnover, rising to 35 million euros or 7 percent for prohibited practices, with the full high-risk regime applicable from 2 August 2026.

The EU AI Act's high-risk obligations form the cumulative compliance regime imposed by Regulation (EU) 2024/1689 on AI systems deployed in critical infrastructure, credit scoring, employment, education, law enforcement, migration, administration of justice, and democratic processes. Providers must implement a documented risk management system, ensure high-quality datasets, maintain technical documentation and automatic logging, guarantee transparency toward deployers, enable effective human oversight by design, and demonstrate accuracy, robustness and cybersecurity. Deployers carry parallel duties covering purpose-limited use, oversight assignment, incident reporting, and in certain contexts fundamental rights impact assessment. Dr. Raphael Nagel (LL.M.) treats this architecture in MASCHINENRECHT as ex ante attribution infrastructure rather than bureaucratic overhead.

Which AI systems fall within the EU AI Act high-risk category?

The AI Act classifies a system as high-risk when it appears in Annex III or operates as a safety component of a product governed by Union harmonization legislation such as the Medical Devices Regulation. Annex III enumerates eight domains: biometrics, critical infrastructure, education, employment, essential public and private services, law enforcement, migration, and the administration of justice and democratic processes.

Concrete examples densely populate this perimeter. An AI scoring the creditworthiness of a natural person is high-risk by virtue of Annex III point 5(b). Recruitment and personnel filtering sit under point 4, the category that would have captured the Amazon recruiting tool retired in 2018 after it systematically downranked female candidates because the historical hiring data was male-dominated. AI for law enforcement risk assessment falls under point 6, the European cousin of the COMPAS system investigated by ProPublica in 2016. Triage AI in emergency care touches both Annex III point 5(d) on emergency services, including emergency healthcare patient triage, and the Medical Devices Regulation, producing dual classification under two regimes at once.

Classification hinges on function, not label. An AI drafting internal meeting summaries is not high-risk. The same underlying foundation model, wired into a clinical triage workflow or a social benefit allocation engine, is. MASCHINENRECHT, the book by Dr. Raphael Nagel (LL.M.), makes this point with surgical precision: the decisive question is never what the system is called, but which decisions it structures and which fundamental rights it touches. Enterprises building an AI inventory must therefore classify per concrete use case, not per product family or per vendor.
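What use-case-level classification can look like in practice is easy to sketch. The following is a minimal, illustrative Python model of an AI inventory screened per concrete use case; the domain labels and example records are assumptions for demonstration, not an exhaustive or authoritative mapping of Annex III.

```python
from dataclasses import dataclass

# Illustrative shorthand for the eight Annex III domains (not official labels).
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class UseCase:
    system: str              # underlying product or model
    purpose: str             # the concrete decision the system structures
    domain: str | None       # Annex III domain, if any
    safety_component: bool   # embedded in a product under Union harmonization law

def is_high_risk(uc: UseCase) -> bool:
    # Classification hinges on the concrete use, not the product label:
    # the same model can be out of scope in one workflow and high-risk in another.
    return uc.safety_component or uc.domain in ANNEX_III_DOMAINS

inventory = [
    UseCase("foundation-model-x", "internal meeting summaries", None, False),
    UseCase("foundation-model-x", "clinical triage prioritisation", "essential_services", False),
]

for uc in inventory:
    print(uc.purpose, "->", "HIGH-RISK" if is_high_risk(uc) else "not high-risk")
```

Note how the same system identifier appears twice with opposite outcomes: the inventory keys on the use case, not the product family.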

What are the seven pillars of provider obligations under the AI Act?

Providers of high-risk AI must satisfy seven cumulative obligation clusters codified in Articles 9 through 15 of Regulation (EU) 2024/1689: lifecycle risk management, data and data governance, technical documentation, automatic event logging, transparency and information for deployers, human oversight enabled by design, and demonstrable accuracy, robustness and cybersecurity. Each pillar is independently enforceable.

The risk management system under Article 9 is not a one-off document. It must be iterated across the entire lifecycle, covering training, testing, deployment, and post-market behaviour. Article 10 demands that training, validation, and testing data be relevant, representative, and, to the extent possible, free of errors and biases. The Amazon 2018 failure is the textbook illustration of what happens when historical data encodes structural discrimination: the model does not invent bias, it reproduces it with statistical authority, and a diligent provider would have identified this risk through pre-deployment audits on protected attributes.
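A minimal sketch of the kind of pre-deployment audit alluded to above, using the common four-fifths selection-rate comparison as a first-pass screen. The group labels, outcome data, and 80 percent threshold are illustrative assumptions; a real Article 10 audit goes well beyond this single metric.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (protected_group, model_selected)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

selected = defaultdict(int)
total = defaultdict(int)
for group, picked in outcomes:
    total[group] += 1
    selected[group] += picked  # bool counts as 0 or 1

rates = {g: selected[g] / total[g] for g in total}

# Four-fifths rule of thumb: flag any group whose selection rate falls
# below 80 percent of the best-performing group's rate.
best = max(rates.values())
for group, rate in rates.items():
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"group {group}: selection rate {rate:.2f} [{flag}]")
```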

Articles 11 through 13 convert engineering practice into legal duty. Technical documentation must be complete before the system is placed on the market and kept updated throughout the lifecycle. Automatic logs must permit traceability of events for the whole operational lifetime. Transparency toward the deployer is not a marketing gesture but a codified information obligation covering performance, limitations, and known failure modes. Article 14 operationalizes human oversight, requiring that natural persons can effectively monitor, understand, intervene in, override, and shut down the system. Article 15 closes the circle with requirements on accuracy, robustness, and cybersecurity, including resilience against adversarial manipulation of inputs.
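As a concrete illustration, the sketch below combines an append-only event log in the spirit of Article 12 with an Article 14-style override hook. The schema, field names, and the `credit-scoring-v2` identifier are assumptions for demonstration; the Act prescribes traceability and intervenability as outcomes, not any particular format.

```python
import json
import time
import uuid

def log_event(log_file, system_id, event, payload):
    """Append-only JSON-lines event log: every record is timestamped
    and traceable to a system, in the spirit of Article 12."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "event": event,          # e.g. "inference", "override", "shutdown"
        "payload": payload,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

def decide_with_oversight(model_decision, human_review=None, log_file="events.jsonl"):
    """Article 14-style hook: a natural person can intervene and override;
    both the automated output and any override are logged."""
    log_event(log_file, "credit-scoring-v2", "inference", {"decision": model_decision})
    if human_review is not None and human_review != model_decision:
        log_event(log_file, "credit-scoring-v2", "override",
                  {"from": model_decision, "to": human_review})
        return human_review
    return model_decision
```

The design choice worth noting: the override is itself a logged event, so the log later evidences both that oversight existed and that it was exercised.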

What obligations does the AI Act impose on deployers of high-risk AI?

Deployers, formerly called users in earlier AI Act drafts, carry a parallel but distinct duty stack under Article 26. They must use the system in line with provider instructions, assign human oversight to competent personnel with sufficient authority, monitor operation, preserve logs for at least six months, notify providers and authorities of serious incidents, and, in public sector and essential services contexts, conduct a fundamental rights impact assessment.
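The six-month log retention floor is the most mechanical of these duties and translates directly into operational tooling. A minimal sketch, assuming JSON-lines log files in a single directory; the file layout and the decision to key on modification time are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# "At least six months" under Article 26; many deployers retain longer.
RETENTION_FLOOR = timedelta(days=183)

def purge_expired_logs(log_dir: str) -> None:
    """Delete log files only once they are older than the retention floor."""
    cutoff = datetime.now(timezone.utc) - RETENTION_FLOOR
    for path in Path(log_dir).glob("*.jsonl"):
        modified = datetime.fromtimestamp(path.stat().st_mtime, timezone.utc)
        if modified < cutoff:
            path.unlink()
```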

This framing reflects the analysis Dr. Raphael Nagel (LL.M.) develops across Chapter 8 of MASCHINENRECHT: the deployer is the contextual architect of algorithmic risk. The same foundation model is safe in one clinical pathway and catastrophic in another. The AI Act concedes this by refusing to let deployers outsource oversight upstream to the provider chain. A hospital deploying radiology AI cannot discharge its obligations by pointing to CE marking, nor can a bank launching automated credit scoring by invoking vendor certifications.

Article 27 adds the fundamental rights impact assessment for public bodies and for private entities providing public services such as credit institutions and life and health insurers. This assessment must precede deployment and identify, at minimum, the processes in which the system will be used, the period and frequency of use, the categories of natural persons affected, the specific risks of harm, the human oversight measures, and the remedies available internally if those risks materialize. Failure to conduct this assessment is itself an autonomous breach carrying the middle-tier fine of up to 15 million euros or 3 percent of turnover.
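Article 27's minimum contents translate naturally into a completeness gate before go-live. The field names below are an illustrative mapping of the statutory list onto a record type, not an official template.

```python
from dataclasses import dataclass, fields

@dataclass
class FundamentalRightsImpactAssessment:
    # Mirrors the minimum contents listed in Article 27 (illustrative mapping).
    processes_using_system: str
    period_and_frequency_of_use: str
    categories_of_persons_affected: str
    specific_risks_of_harm: str
    human_oversight_measures: str
    internal_remedies: str

def missing_fields(fria: FundamentalRightsImpactAssessment) -> list[str]:
    """Return the names of any fields left empty; deployment should not
    proceed until this list is empty and the assessment is on file."""
    return [f.name for f in fields(fria) if not getattr(fria, f.name).strip()]
```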

How does conformity assessment and CE marking work for high-risk AI?

Before placing a high-risk AI system on the Union market, the provider must complete a conformity assessment procedure. For most Annex III categories, Article 43 permits internal control, meaning self-assessment against harmonized standards. For remote biometric identification and certain product-integrated AI under sectoral legislation, a notified body must intervene. A successful assessment produces the EU declaration of conformity and the CE marking.

Self-assessment is faster and less expensive, but it transfers every gap in internal expertise onto the provider alone. An incomplete or superficial self-assessment becomes a liability trap: in a later damages claim before a civil court, the missing documentation is direct evidence of breach of the duty of care. Notified body certification, while costlier and slower, externalizes validation and materially strengthens the evidentiary position of the provider in litigation, in insurance underwriting, and in institutional procurement.

Substantial modifications restart the assessment clock. The revised Product Liability Directive, adopted in October 2024 as Directive (EU) 2024/2853, treats material updates as equivalent to placing a new product on the market. A provider pushing a model update that materially changes the risk profile, the intended purpose, or the performance envelope must therefore re-assess conformity and may trigger a fresh liability window running for a new decade. MASCHINENRECHT argues that change management is, in this configuration, a liability discipline rather than a product management task.
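Treated as a liability discipline, change management becomes a release gate. A minimal sketch of such a gate follows; the three trigger flags are an illustrative reading of what a substantial modification can look like, not a statutory test.

```python
def requires_reassessment(update: dict) -> bool:
    """Gate a model update behind conformity re-assessment when it amounts
    to a substantial modification (illustrative triggers, not a legal test)."""
    return (update.get("intended_purpose_changed", False)
            or update.get("risk_profile_changed", False)
            or update.get("performance_envelope_changed", False))

release = {"version": "2.4.0", "risk_profile_changed": True}
if requires_reassessment(release):
    raise RuntimeError("Substantial modification: re-run the Article 43 "
                       "conformity assessment before release.")
```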

What sanctions and enforcement architecture back the high-risk regime?

The AI Act operates a three-tier administrative sanctions structure in Article 99. Prohibited practices under Article 5 attract fines up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher. Breach of high-risk obligations on providers and deployers triggers up to 15 million euros or 3 percent. Supplying incorrect, incomplete, or misleading information to authorities carries up to 7.5 million euros or 1 percent.
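The "whichever is higher" mechanics are simple arithmetic, shown in the sketch below; the 2 billion euro turnover figure is invented for illustration.

```python
TIERS = {  # (fixed cap in euros, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover: float) -> float:
    fixed, share = TIERS[tier]
    return max(fixed, share * worldwide_turnover)  # whichever is higher

# Hypothetical undertaking with 2 billion euros worldwide turnover:
print(max_fine("high_risk_obligation", 2_000_000_000))  # 60,000,000.0
```

For any undertaking above roughly 500 million euros turnover, the percentage prong of the high-risk tier exceeds the fixed cap, which is why the turnover-based figure dominates planning at scale.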

These figures understate the true economic cost. Direct fines compete with market access restrictions, mandatory withdrawal or recall of non-conforming systems, downstream civil damages under the revised Product Liability Directive, and reputational collapse in regulated verticals. Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, emphasizes in MASCHINENRECHT that the indirect consequences typically exceed the administrative fine by a multiple. Insurers, rating agencies such as Moody’s and S&P, and institutional investors increasingly price AI governance maturity directly into the cost of capital.

The timeline of enforcement matters as much as the quantum. The AI Act entered into force on 1 August 2024. Prohibited practices applied from 2 February 2025. The general purpose AI regime and governance architecture activated on 2 August 2025. Full high-risk obligations apply from 2 August 2026, with certain product-integrated systems on a longer transitional path to 2027. National market surveillance authorities, in Germany likely the Bundesnetzagentur for selected sectors, coordinate enforcement. Fragmented national enforcement, already familiar under the GDPR, remains a known structural risk for cross-border providers.

The EU AI Act high-risk regime is the closest thing Europe has produced to a constitution of algorithmic accountability. It refuses the comfortable fiction that AI is a mere tool. It assigns duties along the entire value chain, with providers bearing design responsibility, deployers carrying contextual responsibility, and conformity assessment functioning as the gatekeeper to the Union market. Enterprises that read Articles 9 through 15 as a compliance checklist miss the point. Those articles are the skeleton of a new market architecture in which documentation, logging, and effective human oversight become the decisive proof of lawful operation in every downstream civil claim.

Dr. Raphael Nagel (LL.M.) argues throughout MASCHINENRECHT that liability in the AI economy is not a brake on innovation but its infrastructure. The companies that will scale into regulated verticals, secure institutional capital from European pension funds and insurers, and obtain insurable status for algorithmic risk are those that treat Article 9 risk management, Article 14 oversight, and Article 43 conformity assessment as strategic assets rather than bureaucratic overhead.

Tactical Management advises clients on precisely this strategic repositioning: converting AI Act obligations into defensible governance architecture that doubles as an evidentiary shield. The firms that understand this before 2 August 2026 will not merely survive the transition. They will be the ones setting the operational standards that the rest of the European market must copy.

Frequently asked questions

When do high-risk obligations under the EU AI Act become applicable?

The AI Act entered into force on 1 August 2024. Prohibited practices under Article 5 applied from 2 February 2025. Obligations for general purpose AI models and the governance architecture activated on 2 August 2025. The full high-risk regime under Articles 6 to 27, including provider duties, deployer duties, and conformity assessment, becomes applicable on 2 August 2026. Certain product-integrated high-risk AI governed by sectoral harmonization legislation has a longer transitional period extending into 2027.

What is the difference between provider and deployer obligations?

The provider designs, develops, and places the high-risk AI system on the Union market. Provider duties under Articles 9 to 15 cover risk management, data governance, technical documentation, logging, transparency, oversight design, accuracy, robustness, and cybersecurity. The deployer operates the system under its own authority. Deployer duties under Article 26 cover use in line with provider instructions, assignment of qualified human oversight, input data monitoring, incident reporting, and, in public bodies and essential service providers, fundamental rights impact assessment. Both tiers are simultaneously and independently enforceable.

How are fines under the AI Act calculated?

Fines scale with the severity of the breach and the size of the undertaking. Breach of the prohibitions in Article 5 reaches 35 million euros or 7 percent of worldwide annual turnover, whichever is higher. Breach of provider or deployer obligations for high-risk AI reaches 15 million euros or 3 percent. Providing incorrect, incomplete, or misleading information to competent authorities reaches 7.5 million euros or 1 percent. Small and medium enterprises, including startups, face proportionally lower caps under Article 99(6) of the Regulation.

Does CE marking under the AI Act shield a provider from civil liability?

No. CE marking confirms that the conformity assessment procedure was completed and that the provider declares compliance with the AI Act. It is evidentiarily useful in later civil litigation, but it is not a liability shield. The revised Product Liability Directive 2024/2853 allows claimants to rely on presumptions of defect and causation in technically complex cases. A CE marked system that causes harm can still trigger damages claims, and the provider retains the burden of rebutting the statutory presumptions.

Are general purpose AI models automatically classified as high-risk?

No. The AI Act distinguishes between general purpose AI models and high-risk AI systems. GPAI models, including large foundation models, are regulated under Articles 51 to 56 with transparency, documentation, copyright compliance, and, for models with systemic risk, additional evaluation duties. A GPAI model becomes part of a high-risk AI system when it is integrated into an Annex III use case or into a regulated product. In that configuration, GPAI duties and full high-risk obligations apply cumulatively to the respective actors in the value chain.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)