AI in Critical Infrastructure and NIS2: Board Duties

Dr. Raphael Nagel (LL.M.) on AI in Critical Infrastructure and NIS2 — Tactical Management
From the work · ALGORITHMUS

AI in Critical Infrastructure and NIS2: Data Poisoning, Adversarial Attacks, and the New Board Liability

AI in Critical Infrastructure and NIS2 refers to the legal and technical regime governing AI-enabled KRITIS operators under the NIS2 Directive, which EU member states had to transpose into national law by October 2024. It imposes personal board liability, fines of up to 10 million euros or 2 percent of global annual turnover (whichever is higher), and mandates defenses against data poisoning, model inversion, and adversarial attacks.

AI in Critical Infrastructure and NIS2 is the intersection of algorithmic control systems in essential sectors (energy, water, health, digital infrastructure, transport) and the expanded NIS2 Directive regime, whose national transposition deadline fell in October 2024. It defines how essential and important entities must secure AI components against KRITIS-specific threat vectors, including data poisoning, model inversion, and adversarial inputs. Unlike classic cybersecurity regulation, NIS2 attaches personal liability to management bodies, requires comprehensive risk management, and exposes non-compliant operators to sanctions of up to ten million euros or two percent of global annual turnover. Dr. Raphael Nagel (LL.M.) treats this convergence as the central governance question for European infrastructure boards.

Why does NIS2 change the legal position of boards operating AI in critical infrastructure?

NIS2 changes the legal position of boards because it makes management personally liable for cybersecurity implementation in essential and important entities. Fines reach up to 10 million euros or 2 percent of global annual turnover, and the directive, with national transposition due by October 2024, extends KRITIS status to sectors previously unregulated at EU level.

The regulatory novelty is not the existence of cybersecurity duties, which already applied under NIS1 and sectoral rules such as the German IT-Sicherheitsgesetz 2.0. The novelty is the direct, personal accountability of Vorstand and Geschäftsführung members. Boards cannot discharge their duty by signing off on an IT strategy paper once a year. They must demonstrate active oversight of risk management, an early warning within 24 hours of a significant incident followed by a full notification within 72 hours, and documented supply chain security for AI components.

For operators running algorithmic control in energy dispatch, water treatment, hospital logistics, or payment clearing, this means the AI governance function moves from the CIO into the boardroom. Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, treats this as a categorical shift: the moment a Vorstand signs off on an AI system steering a KRITIS process, that signature carries fiduciary weight that survives the next audit cycle.

What are the AI-specific threat vectors that NIS2 implicitly requires operators to defend against?

NIS2 requires defense against AI-specific threat vectors that go beyond classical cyber risks. Three categories matter most: data poisoning, which contaminates training data so the model fails predictably in defined situations; model inversion, which extracts infrastructure intelligence from model outputs; and adversarial attacks, which manipulate inputs so the system misclassifies without the manipulation being visible to human operators.

Data poisoning is the most insidious because it is latent. An energy load management model, poisoned so it responds incorrectly to demand spikes during extreme weather, will behave normally for years until the trigger condition arrives. The resulting blackout cascade looks like technical failure. Forensic attribution is extraordinarily difficult. Model inversion, meanwhile, lets attackers extract network topologies, operating margins, and single points of failure from the outputs of a deployed model, turning the AI itself into a reconnaissance tool.
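To make that latency concrete, here is a minimal sketch in Python, assuming NumPy and scikit-learn. The telemetry features and the rare "trigger" flag are hypothetical illustrations, not a real grid model; the point is the validation blind spot, not the specific numbers.

```python
# Minimal illustration of latent data poisoning: a label-flip attack that
# only fires under a rare trigger condition (a hypothetical "extreme
# weather" flag). All features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for load-management telemetry: two generic features
# plus a binary trigger. Labels: 1 = "demand spike, shed load", 0 = normal.
n = 20_000
X = rng.normal(size=(n, 3))
X[:, 2] = (rng.random(n) < 0.05).astype(float)        # rare trigger flag
y = ((X[:, 0] + X[:, 1] > 0.5) | (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poisoning: flip labels only on triggered training rows, so the model
# learns to treat the trigger condition as "normal". Everything else
# stays clean, so aggregate validation metrics look healthy.
y_poisoned = y_train.copy()
y_poisoned[X_train[:, 2] == 1] = 0

model = LogisticRegression().fit(X_train, y_poisoned)

trigger = X_test[:, 2] == 1
print("accuracy off-trigger:", model.score(X_test[~trigger], y_test[~trigger]))
print("accuracy on-trigger: ", model.score(X_test[trigger], y_test[trigger]))
# Expected pattern: high accuracy off-trigger, sharply degraded on-trigger.
```

The audit problem is visible in the last two lines: a reviewer who checks only aggregate accuracy sees a healthy model, because the poisoned behavior is confined to a slice of the input space that rarely occurs until the trigger condition arrives.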

Adversarial attacks are the most acute for AI-based anomaly detection in industrial control systems. A crafted input can cause the detector to classify a live intrusion as routine operation. No alert is generated. The human operator, to whom nothing was surfaced, concludes that everything is nominal. This is the class of vulnerability that conventional penetration testing, built for deterministic IT environments, is structurally not designed to find.
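The asymmetry can be shown in a few lines. The following sketch, NumPy only, crafts an FGSM-style perturbation against a toy linear anomaly detector; the detector weights and feature scales are synthetic illustrations, not a real ICS product or dataset.

```python
# Minimal FGSM-style evasion sketch against a linear "anomaly detector".
import numpy as np

rng = np.random.default_rng(1)

# Toy detector: logistic score sigma(w.x + b). Class 1 = "intrusion",
# class 0 = "routine operation". Weights are synthetic illustrations.
w = rng.normal(size=20)
b = -0.5

def detect(x):
    """Probability that the input is an intrusion."""
    return 1 / (1 + np.exp(-(x @ w + b)))

# A sensor reading the detector correctly flags: aligned with w, so the
# intrusion score is clearly above the alert threshold.
x = 0.4 * np.sign(w)
print(f"clean score: {detect(x):.3f}")       # > 0.5 -> alert raised

# For a linear model the input gradient of the logit is w itself, so the
# most efficient max-norm evasion steps against sign(w) (FGSM direction).
# Choose the smallest per-feature budget that just crosses the decision
# boundary, plus a 10 percent margin.
logit = x @ w + b
eps = 1.1 * logit / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(f"adversarial score: {detect(x_adv):.3f}")   # < 0.5 -> no alert
print(f"per-feature change bounded by eps = {eps:.3f}")
```

The per-feature budget printed at the end is the operational point: each individual sensor value moves by a fraction that no human screen-watcher would notice, yet the classification flips from intrusion to routine operation.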

What does the Sandworm precedent tell us about AI-era KRITIS warfare?

The Sandworm precedent demonstrates that critical infrastructure can be disabled through software alone, without physical weapons or troops. In December 2015, a cyber operation attributed by Western intelligence services to the GRU unit Sandworm used BlackEnergy malware to switch off Ukrainian energy utilities, cutting power to around 230,000 customers in winter; a year later, in December 2016, the same group deployed the Industroyer framework against Kyiv's transmission grid.

Those attacks targeted conventional industrial control systems. The AI-era equivalent is structurally more dangerous because the attack surface grows with every AI component introduced into grid management, predictive maintenance, or load balancing. NotPetya in June 2017, also attributed to Russian state actors, inflicted more than 10 billion dollars in global damage: Maersk had to rebuild roughly 45,000 PCs and 4,000 servers; container movements stopped for over a week; Merck and FedEx were hit as collateral damage in a geopolitical operation nominally aimed at Ukraine.

The Colonial Pipeline incident of May 2021 showed that even conventional ransomware, in this case by the DarkSide group, can halt a major fuel artery on the US East Coast for six days and extract a 4.4 million dollar Bitcoin ransom. For European KRITIS boards, the lesson in ALGORITHMUS by Dr. Raphael Nagel (LL.M.) is direct: the threshold for strategic-scale disruption has fallen, and AI both lowers the attacker’s cost of entry and raises the defender’s detection burden.

Which concrete measures must KRITIS operators implement to satisfy NIS2 on AI systems?

KRITIS operators must implement three concrete measures to satisfy NIS2 on AI systems. First, complete network segmentation: AI components in critical infrastructure must not connect directly to the public internet; air-gap configurations or strictly isolated segments with controlled data transfer protocols are requirements, not options. Second, redundant human control: for every critical steering decision an AI makes, a human operator must be able to intervene, and must periodically exercise operations without the AI.
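As a control pattern, the second measure can be sketched in a few lines of Python. All names below are hypothetical; the sketch shows an AI proposal gate that requires operator sign-off above a criticality threshold and supports a manual mode in which the AI is bypassed entirely.

```python
# Minimal sketch of a redundant-human-control gate: the AI proposes, but any
# critical action needs an operator decision, and the whole AI stage can be
# bypassed for manual-operation exercises. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional


class Mode(Enum):
    AI_ASSISTED = auto()   # AI proposes, operator confirms critical actions
    MANUAL = auto()        # AI bypassed entirely (exercise / fallback mode)


@dataclass
class Proposal:
    action: str
    criticality: float     # 0.0 routine .. 1.0 safety-critical


def decide(proposal: Optional[Proposal],
           mode: Mode,
           operator_confirm: Callable[[Proposal], bool],
           manual_action: Callable[[], str],
           critical_threshold: float = 0.7) -> str:
    """Return the action to execute, enforcing human control."""
    if mode is Mode.MANUAL or proposal is None:
        # Manual fallback must work without the AI in the loop at all.
        return manual_action()
    if proposal.criticality >= critical_threshold:
        # Critical steering decisions require explicit operator sign-off.
        if not operator_confirm(proposal):
            return manual_action()
    return proposal.action


# Example wiring: a critical AI proposal that the operator rejects.
result = decide(
    Proposal(action="open_breaker_7", criticality=0.9),
    Mode.AI_ASSISTED,
    operator_confirm=lambda p: False,          # operator declines
    manual_action=lambda: "hold_current_setpoint",
)
print(result)   # -> hold_current_setpoint
```

The design point is that the manual path is a first-class branch, not an exception handler: it must be exercisable on demand, which is what makes periodic operation without the AI testable at all.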

Third, regular adversarial testing by red teams familiar with AI-specific attack vectors. Classical penetration testing assumes a deterministic target. AI red teaming assumes a probabilistic one, and tests for adversarial inputs, model drift, and data supply chain compromise. These are distinct disciplines. The BSI, as central advisor and early-warning authority in Germany, increasingly expects this differentiation in its audit guidance.
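One of these AI-specific checks, drift detection, is simple enough to sketch. The fragment below assumes SciPy; the thresholds and figures are illustrative choices, not BSI audit values. It compares a live feature window against the validated training baseline with a two-sample Kolmogorov–Smirnov test.

```python
# Minimal drift-monitoring sketch: compare live model inputs against the
# training baseline. A slow shift from drift, sensor tampering, or a
# poisoned upstream data source would all surface through this check.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Baseline: feature distribution the model was trained and validated on
# (illustrative: a load measurement in MW).
baseline = rng.normal(loc=50.0, scale=5.0, size=10_000)

# Live window: the same feed after a gradual shift.
live = rng.normal(loc=53.0, scale=5.0, size=2_000)

stat, p_value = ks_2samp(baseline, live)
ALERT_P = 0.01   # illustrative alert threshold

print(f"KS statistic: {stat:.3f}, p-value: {p_value:.2e}")
if p_value < ALERT_P:
    print("DRIFT ALERT: live inputs no longer match the validated baseline; "
          "route to manual review before trusting model outputs.")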

A fourth, structural measure concerns the resource asymmetry between state-level attackers and municipal infrastructure operators. A Stadtwerk with a constrained IT security budget cannot match a state actor with dedicated AI attack capabilities. That gap cannot be closed by the individual operator. It requires collective defense architectures, shared threat intelligence, and mandatory minimum standards that Tactical Management has argued should be modeled on collective defense regimes from military doctrine rather than on conventional B2B service contracting.

How should boards structure AI governance for critical infrastructure to align with NIS2 liability?

Boards should structure AI governance around four mandatory items: a documented AI inventory classified by NIS2 and AI Act risk category; a review process that screens every new AI deployment before go-live; an incident response procedure for AI-specific failures; and a reporting cadence that surfaces AI risk to the supervisory board quarterly, not annually.
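A minimal data-structure sketch of the first item, the AI inventory, might look as follows. The field names are hypothetical; the categories mirror the NIS2 and AI Act terms used throughout this article.

```python
# Minimal sketch of a board-level AI inventory entry, classified by NIS2
# entity type and AI Act risk tier. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Nis2Entity(Enum):
    ESSENTIAL = "essential"
    IMPORTANT = "important"
    OUT_OF_SCOPE = "out_of_scope"


class AiActRisk(Enum):
    HIGH = "high"            # e.g. Annex III critical-infrastructure systems
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AiSystemRecord:
    name: str
    process: str                      # KRITIS process the system steers
    nis2_entity: Nis2Entity
    ai_act_risk: AiActRisk
    last_adversarial_test: date       # red-team date, surfaced to the board
    manual_fallback_exercised: date   # last operation run without the AI
    poisoning_controls: bool          # training-data provenance checks in place
    owner: str = "unassigned"
    open_findings: list[str] = field(default_factory=list)


inventory = [
    AiSystemRecord(
        name="grid-load-forecaster",
        process="energy dispatch",
        nis2_entity=Nis2Entity.ESSENTIAL,
        ai_act_risk=AiActRisk.HIGH,
        last_adversarial_test=date(2025, 3, 1),
        manual_fallback_exercised=date(2025, 1, 15),
        poisoning_controls=True,
    ),
]

# Quarterly board report: every high-risk system with stale red-teaming.
stale = [r.name for r in inventory
         if r.ai_act_risk is AiActRisk.HIGH
         and (date.today() - r.last_adversarial_test).days > 90]
print("high-risk systems overdue for adversarial testing:", stale)
```

Keeping the test and exercise dates on the record itself is what turns the quarterly reporting cadence into a mechanical query rather than a bespoke data-gathering effort before each board meeting.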

The reporting content must be specific. Which AI systems are in operation whose failure would cause material regulatory, reputational, or operational damage? Are the most critical systems protected against data poisoning, model drift, and adversarial attacks? Are we AI Act ready for all systems that could fall under the high-risk category, given that the AI Act's obligations for Annex III high-risk systems apply from August 2026? Do we have the internal competence to answer these questions, or are we fully dependent on external advisors who know our systems less well than we do?

The final governance layer is exercise discipline. A human operator who has never run the plant without the AI will not be able to do so in a crisis. This is the operational face of the cognitive atrophy problem analyzed in ALGORITHMUS by Dr. Raphael Nagel (LL.M.): human-in-the-loop is not only a compliance requirement; it is the competence preservation measure that turns redundant control from a checklist item into a real line of defense.

The convergence of AI in critical infrastructure and NIS2 is the regulatory and operational reality that European infrastructure boards will be judged on for the remainder of this decade. It is not a compliance checklist that can be delegated to an external advisor and revisited at the next audit. It is a continuous governance discipline that demands board-level competence, documented exercise of manual fallback, adversarial red teaming designed for AI systems, and transparent reporting to supervisory bodies.

The Sandworm operation of 2015, NotPetya in 2017, and Colonial Pipeline in 2021 are not historical footnotes. They are the baseline against which AI-era attacks must be assessed, and the AI layer makes the attacker's task cheaper and the defender's task harder. Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, argues in ALGORITHMUS that boards that treat this as a technical matter will discover, in the moment of a serious incident, that personal liability under NIS2 translates directly into personal consequences.

The forward-looking analytical claim is straightforward: within five years, the first wave of NIS2 enforcement actions against management bodies of AI-dependent KRITIS operators will define European corporate security doctrine for a generation. The boards that prepare now will set that doctrine. The boards that wait will be cited in it.

Frequently asked questions

When did NIS2 enter into force and who is covered?

NIS2 entered into force at EU level in January 2023, and member states had to transpose it into national law by 17 October 2024. It covers essential and important entities across an expanded list of sectors including energy, transport, banking, financial market infrastructure, health, drinking and waste water, digital infrastructure, public administration, space, postal and courier services, waste management, chemicals, food production, and manufacturers of critical products. Medium-sized and large organizations in these sectors fall directly in scope, and many smaller suppliers are pulled in through supply chain security obligations.

Are AI systems specifically mentioned in NIS2?

NIS2 does not regulate AI by name; it is technology-neutral. But its risk management obligations under Article 21, its incident reporting regime, and its supply chain security requirements apply directly to AI components used in critical processes. Combined with the EU AI Act’s high-risk category for critical infrastructure, this produces a dual regime: operators must satisfy NIS2 cybersecurity obligations and AI Act transparency, documentation, and human oversight obligations simultaneously for the same system.

What is data poisoning and why is it a KRITIS-specific concern?

Data poisoning is a cyber attack in which adversaries contaminate the training data of a machine learning system so the model behaves incorrectly in specific operational conditions. In KRITIS, this is particularly dangerous because the malfunction can stay dormant until a trigger event, such as extreme weather or a demand spike. The resulting failure looks like a technical incident, not an attack, which complicates attribution and response and can cause cascading damage across energy, water, or health systems.

Who is personally liable under NIS2 if an AI-driven KRITIS incident occurs?

Members of the management body, Vorstand and Geschäftsführung in German law, bear personal responsibility for approving and overseeing cybersecurity risk management measures. National transpositions vary in exact formulation, but the common core is that liability no longer stops at the CISO or CIO level. Supervisory board members also carry enhanced oversight duties. This is a decisive change from NIS1 and has made AI governance for critical infrastructure a direct board topic rather than an IT delegation.

How does NIS2 interact with the EU AI Act for critical infrastructure operators?

NIS2 and the EU AI Act operate in parallel and compound each other. The AI Act classifies AI systems used as safety components in critical infrastructure as high-risk under Annex III, triggering documentation, transparency, human oversight, and accuracy requirements that apply from August 2026. NIS2 adds the cybersecurity, incident reporting, and board liability layer. An operator using AI to manage a power grid must therefore meet both regimes for the same system, and must demonstrate that its governance architecture addresses both sets of obligations coherently.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)