Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the book · MASCHINENRECHT

Who Is Liable When AI Makes Decisions: The Four-Role Liability Chain Under the EU AI Act

Under the European framework analyzed by Dr. Raphael Nagel (LL.M.) in MASCHINENRECHT, liability when AI makes decisions is distributed across four roles: manufacturer, integrator, operator and user. The AI Act and the 2024 Product Liability Directive shift the burden of proof, treating documentation failures and missing human oversight as independent grounds for attribution.

Who is liable when AI makes decisions is a question European law answers through a four-role allocation rather than by naming a single defendant. Through the AI Act of 2024, the revised Product Liability Directive, and sector regimes such as DORA and the Medical Device Regulation, European law assigns concurrent duties to the manufacturer of the model, the integrator who embeds it, the operator or deployer who runs it in real processes, and the user who executes the output. As Dr. Raphael Nagel (LL.M.) argues, the old triad of product, producer and user has collapsed; liability now tracks the actual decision architecture, not mere visibility in the workflow.

Why classical attribution collapses when an AI system decides

Classical attribution collapses because every functioning legal order presupposes that a natural or legal subject can be named behind each consequential decision. When an AI system prioritizes cases, assigns scores or triggers alarms, causation fragments across developers, datasets, integrators, operators and end users, leaving injury real but defendants diffuse.

Three documented cases expose the mechanism. Australia’s Robodebt program, running from 2016 to 2019, automatically compared tax and welfare data and issued hundreds of thousands of debt notices without caseworker review; a Royal Commission later found the program unlawful from its inception. The Dutch Toeslagenaffaire implicated tens of thousands of families, many with a migration background, in false fraud classifications and forced the Rutte cabinet’s resignation in 2021. Amazon withdrew its internal recruiting tool in 2018 after engineers confirmed that the model systematically downgraded female applicants by reproducing a decade of male-dominated hiring records.

The core diagnosis in MASCHINENRECHT is that the problem is not machine intelligence but organized irresponsibility. When a system fails, each party points to the next link. ProPublica’s 2016 analysis of the COMPAS recidivism tool showed US judges relying on opaque risk scores while believing themselves to act on objective data. This is not a defect in any single actor’s conduct. It is the architecture of attribution itself breaking under adaptive, probabilistic systems that the classical codes of negligence, product liability and professional duty were never built to govern.

The four-role liability chain: manufacturer, integrator, operator, user

The liability chain set out in MASCHINENRECHT allocates responsibility across four distinct roles. The manufacturer answers for model architecture and training data. The integrator answers for thresholds, cascades and context transfer. The operator, called the deployer under the AI Act, answers for governance, validation and effective human oversight. The user answers for evident misuse within that framework.

The manufacturer’s exposure begins before a single customer is onboarded. Data selection, loss functions, robustness thresholds and explainability design are each normative choices with downstream legal consequences. A model trained on historically biased credit records does not merely inherit statistics; it codifies discrimination as production logic. The revised Product Liability Directive of 2024 treats such software as a product, meaning that defects in learned behavior, in documentation or in post-market monitoring can trigger strict liability. Articles 11 and 12 of the AI Act oblige providers of high-risk systems to maintain technical documentation and automatic logging sufficient to reconstruct any consequential output.

The operator is the true hotspot. A hospital running a radiology AI, a bank deploying a credit-scoring engine, or a public authority issuing automated decisions under Article 22 GDPR translates abstract model behavior into concrete harm. Section 35a of the German Verwaltungsverfahrensgesetz permits fully automated administrative acts only where a statute authorizes them and neither discretion nor a margin of appreciation is involved. The integrator, who assembles APIs, thresholds and escalation logic, answers for cascade errors that routinely trace back to integration choices rather than to base-model defects. The user, finally, is rarely sovereign; they are the last human surface of an already structured decision.

Why the tool fiction no longer holds under European law

The tool fiction treats AI as an extension of human will, but European law has moved decisively beyond it. The AI Act treats high-risk systems as autonomous risk sources requiring conformity assessment, logging and human oversight. The 2024 Product Liability Directive treats AI software as a product with its own defect regime independent of operator conduct.

Tool logic presumes that the human in the loop retains time, information, competence, institutional backing and genuine override power. Research on automation bias in aviation, radiology and banking compliance shows the opposite: operators follow system outputs even when their own prior judgment diverged, because deviating carries personal cost while deferring to the system does not. In high-frequency trading, the Flash Crash of May 6, 2010 erased nearly 1,000 Dow Jones points within minutes through algorithmic feedback that no human trader could interrupt. Such time scales are constitutively incompatible with meaningful oversight.

The AI Act places duties on both providers and deployers of high-risk systems. Foundation models, addressed as general-purpose AI models, carry transparency and documentation obligations at the source. Violations of the prohibited-practices regime attract fines of up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher. Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT that the legislator has implicitly abandoned the tool fiction by treating these systems as independent risk architectures. A contract with a manufacturer may facilitate recourse, but it cannot extinguish primary liability toward injured third parties.

How burden of proof reversal changes the litigation calculus

The revised Product Liability Directive allows courts to presume defect and causation for technically complex products when the claimant presents plausible indicators and the defendant cannot provide a satisfactory explanation. For AI defendants, documentation, logging and post-market monitoring migrate from back-office overhead to the first line of litigation defense.

Before 2024, claimants in European product-liability actions bore the full evidentiary burden against defendants who alone possessed the logs, training sets and model versions. That asymmetry is now reversed in defined situations. The provisions on technically complex products allow judges to treat missing or inadequate documentation as an independent indicator of defect. DORA, applicable since 17 January 2025, requires financial institutions to run adversarial resilience testing on AI-dependent processes and to manage third-party ICT risk explicitly. The Medical Device Regulation subjects diagnostic AI to dual scrutiny, layering MDR conformity on top of the AI Act. Opacity no longer protects the defendant; it condemns them.

The Dutch childcare benefits scandal illustrates the evidentiary dynamic. For years, affected families could not obtain the risk-profile logic used by the tax authority; once forced disclosure began, liability cascaded from caseworkers to ministers and ultimately to the resigning cabinet. Under the new rules, comparable plaintiffs across the EU will not need to deconstruct the model; they will need only to establish plausibility of harm and absence of substantive explanation. Insurers including Munich Re and Swiss Re are pricing AI-governance maturity directly into underwriting for algorithmic-liability coverage.

Liability as the competitive infrastructure of the AI economy

In the analysis of Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, liability has become the selection mechanism of the AI economy. Enterprises that organize, document, insure and anticipate attribution prevail not despite liability but because of their liability capacity. The rest founder on the cost of their own vagueness.

Six dimensions drive this inversion. Regulatory certainty reduces reactive spend. Litigation defense becomes feasible only with documented governance. Insurance pricing incorporates governance maturity as a direct underwriting variable. Institutional investors applying ESG due diligence treat AI-governance reporting as a material input. Public-sector procurement in regulated markets will exclude non-compliant providers once AI Act high-risk duties fully apply in August 2026. Reputation is the quietest but largest variable; credible crisis communication is impossible without a pre-existing incident-response protocol.

MASCHINENRECHT argues that the industrial era was defined by production, the digital era by information, and the next phase by attribution. The firms that master attribution control capital access, insurability and political legitimacy. Supervisory boards that still treat AI governance as a compliance cost are mispricing their own risk. Founding partners, general counsel and chief risk officers now face a simple strategic test: either documented liability resilience becomes a first-order asset by 2026, or the enterprise enters the post-AI-Act market structurally disadvantaged.

Who is liable when AI makes decisions is no longer an academic question. It is the operational test that determines whether a European enterprise remains investable, insurable and regulatorily viable through the full application of the AI Act in August 2026 and beyond. The four-role chain, the reversed burden of proof under the 2024 Product Liability Directive and the sectoral layers of DORA and the Medical Device Regulation converge on a single conclusion: liability is not downstream of innovation. It is the scaffolding on which innovation now stands. Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, develops in MASCHINENRECHT the thesis that organized irresponsibility is the real adversary, not algorithmic intelligence, and that precise attribution is the civilizational task of the decade. Boards, counsel and investors who treat attribution as architecture, not as overhead, will compound advantage. Those who wait for a catalyst case to force the redesign will find that by then the underwriting terms, the precedents and the market have moved without them.

Frequently asked questions

Who is primarily liable when an AI system causes damage under EU law?

Under the framework analyzed in MASCHINENRECHT, primary liability attaches to whichever role in the chain of manufacturer, integrator, operator or user exercises actual decision power over the harm-producing function. The AI Act and the 2024 Product Liability Directive combine to treat the operator as the most frequently reachable defendant, with joint and several recourse against manufacturers and integrators.

Does a human in the loop shield an organization from AI liability?

No. Dr. Raphael Nagel (LL.M.) argues that nominal human oversight without time, information, competence, institutional backing and real override authority is a liability trap rather than a defense. Courts and regulators increasingly look behind the last signature to assess whether the operator established real control or merely supplied legitimation. Automation bias has become litigation-relevant evidence.

How does the 2024 Product Liability Directive change the burden of proof for AI claimants?

The directive permits courts to presume defect and causation for technically complex products when the claimant shows plausible indicators and the defendant fails to provide a satisfactory explanation. This reversal transfers the evidentiary burden on opaque AI systems from the injured party to the entity that controlled the documentation, logs and model versions, transforming documentation quality into a first-order strategic asset.

Are operators liable even when using a certified AI system?

Yes. Conformity assessment under the AI Act is a relevant indicator of care but not an absolute defense. Operators retain primary liability for context selection, validation on their own population, human-oversight architecture and post-market monitoring. Section 35a of the German Verwaltungsverfahrensgesetz and Article 22 GDPR further impose specific obligations on public-sector and consumer-facing deployers, which no supplier certification can neutralize.

What role does insurance play in AI liability allocation?

Insurers have become de facto co-regulators. Munich Re, Swiss Re and specialist carriers now condition algorithmic-liability coverage on documented governance, auditability and post-market monitoring. What is not insurable does not scale. Dr. Raphael Nagel (LL.M.) treats this underwriting pressure as part of the market architecture of the AI economy and a central theme of MASCHINENRECHT, with direct consequences for capital access and market positioning.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)