Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
Technology · Regulation

EU AI Act and Private Equity: What Every Portfolio Company Must Know Before 2026

The EU AI Act is now law. Most PE-backed portfolio companies don't know whether they're high-risk operators, limited-risk deployers, or somewhere in between. The compliance window is closing — and board liability is already live.

## The Law Is Already In Force

There is a gap between what the EU AI Act requires and what most PE-backed portfolio companies have done about it. That gap is no longer defensible.

The EU AI Act entered into force on 1 August 2024. The phase-in schedule is not a future concern — it is a present obligation. The prohibitions on unacceptable-risk AI systems became enforceable on 2 February 2025. The governance and transparency requirements for general-purpose AI models came into effect in August 2025. The high-risk AI requirements under Annex III — the category most likely to affect PE portfolio companies in industrial, HR, and financial services contexts — become fully applicable in August 2026.

That timeline means that portfolio companies with high-risk AI exposure have a narrow window to complete conformity assessments, implement documentation obligations, and establish ongoing monitoring. The compliance infrastructure required under the Act is not a checkbox exercise. It requires organizational redesign, contractual review across the AI supply chain, and technical changes to existing AI systems. For many portfolio companies that have been treating this as a 2026 problem, it is already a 2025 problem.

## The Risk Classification Architecture

The EU AI Act establishes a four-tier risk classification system. The tier that applies to a given AI system determines the compliance obligations that flow from it.

**Unacceptable risk** covers AI systems that the Act prohibits entirely: social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), subliminal manipulation, and AI systems that exploit vulnerable populations. No PE portfolio company operating legally should be deploying systems in this category. If your due diligence surfaces one, the acquisition conversation changes entirely.

**High risk** is the category that requires the most attention from PE investors and portfolio company boards. Annex III defines eight sectors in which AI systems are presumptively high-risk:

1. Biometric identification and categorisation
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, worker management, and access to self-employment
5. Access to essential services including credit and insurance
6. Law enforcement
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes

The breadth of category 4 — employment and worker management — and category 5 — access to credit and insurance — means that a significant proportion of AI tools already deployed across PE portfolio companies likely fall within high-risk scope. An AI system that assists with CV screening, performance evaluation, promotion recommendations, or workforce planning is a high-risk system under the Act. A credit scoring model, an insurance underwriting tool, a loan eligibility algorithm — all high-risk by statutory definition.

**Limited risk** captures AI systems that interact directly with humans without being high-risk — chatbots, deepfakes, emotion recognition in specified contexts. These trigger transparency obligations: users must be informed they are interacting with an AI system. This sounds simple but requires a systematic review of customer-facing AI deployments.

**Minimal risk** covers the vast majority of AI applications — spam filters, product recommendation engines, fraud detection operating in low-stakes contexts. The Act imposes no mandatory requirements on these systems, though voluntary codes of conduct are encouraged.

The classification question — which tier applies? — is the first and most consequential question a portfolio company must answer. It cannot be delegated to IT. It requires legal analysis, operational mapping, and a board-level decision.
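That triage can at least be made systematic. The sketch below is a minimal, deliberately crude illustration of the shape of a first-pass inventory script: the `AISystem` record, the keyword lists, and the matching logic are my assumptions for illustration only, and no keyword match substitutes for legal analysis of each deployment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # Article 5 practices
    HIGH = "high-risk"            # Annex III areas
    LIMITED = "limited-risk"      # transparency obligations only
    MINIMAL = "minimal-risk"      # no mandatory obligations

# Illustrative shorthand for the eight Annex III areas; a real inventory
# maps each system to the Annex III text with counsel, not with keywords.
ANNEX_III_AREAS = [
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "administration of justice",
]

PROHIBITED_PRACTICES = ["social scoring", "subliminal manipulation"]

@dataclass
class AISystem:
    name: str
    use_case: str               # free-text description of the deployment
    interacts_with_humans: bool

def classify(system: AISystem) -> RiskTier:
    """First-pass triage only: flags systems for legal review."""
    uc = system.use_case.lower()
    if any(p in uc for p in PROHIBITED_PRACTICES):
        return RiskTier.UNACCEPTABLE
    if any(area in uc for area in ANNEX_III_AREAS):
        return RiskTier.HIGH
    if system.interacts_with_humans:  # e.g. chatbots: transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

cv_screener = AISystem("CV screener", "employment: automated CV screening", False)
print(classify(cv_screener))  # RiskTier.HIGH
```

The value of a script like this is not the classification itself; it is forcing every system in the inventory through the same documented triage before counsel reviews the borderline cases.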
## Annex III in Practice: Which Portfolio Companies Are Affected

Let me be concrete about which common PE portfolio company profiles trigger high-risk status under Annex III.

**Industrial and manufacturing.** A factory deploying AI for autonomous quality control in safety-critical components (aerospace, automotive, medical devices) may trigger the critical infrastructure or safety system provisions. The key question is whether the AI system performs a safety function that, if it fails, could cause harm to persons or property.

**HR technology and workforce management.** Any portfolio company — whether a SaaS HR platform or an industrial holding using AI-assisted hiring — that deploys algorithmic tools for CV screening, interview scoring, attrition prediction, or performance management is operating a high-risk AI system under category 4. This is one of the most commonly underestimated exposure points. The tools are widespread, often acquired through SaaS vendors who have not yet updated their own compliance documentation.

**Financial services.** PE-backed lending platforms, insurance distribution businesses, and fintech companies using automated credit scoring or underwriting models are operating high-risk AI systems under category 5. This overlaps with existing financial regulation but adds a layer: GDPR compliance does not satisfy EU AI Act requirements. A separate conformity assessment is required.

**Healthcare technology.** AI systems used in medical diagnosis support, patient triage, or treatment recommendation trigger both the high-risk provisions of the AI Act and the overlapping Medical Device Regulation. The interaction between these regimes requires specialist legal analysis.

**Critical infrastructure operators.** PE portfolio companies that operate energy systems, water management, digital infrastructure, or transportation networks must assess whether their AI-assisted control and monitoring systems meet the Act's requirements for high-risk deployments in critical infrastructure.

## What High-Risk Status Requires

A high-risk AI system under the Act must satisfy a defined set of technical and organizational requirements before it can be placed on the market or put into service in the EU. The core requirements are:

1. a quality management system covering the entire AI lifecycle;
2. comprehensive technical documentation;
3. a conformity assessment (either self-assessed against harmonized standards or third-party certified, depending on sector);
4. registration in the EU AI database maintained by the Commission;
5. ongoing post-market monitoring with incident reporting obligations; and
6. human oversight mechanisms built into the system's operation.

The documentation requirement alone is substantially more demanding than most PE portfolio companies have historically maintained for technology systems. The Act requires documentation of the intended purpose, the training data used, the validation methods applied, the performance metrics achieved, and the known limitations of the system. This documentation must be kept current and available for inspection by market surveillance authorities.
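One way to read that obligation: the items the Act names (intended purpose, training data, validation methods, performance metrics, known limitations) fit naturally into a single versioned record rather than a scatter of PDFs. The `TechnicalDocumentation` type and review cadence below are my illustrative assumptions, not the Act's Annex IV schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TechnicalDocumentation:
    """Illustrative record of the documentation items named above;
    Annex IV of the Act enumerates the full, binding list."""
    system_name: str
    intended_purpose: str
    training_data_description: str      # provenance, scope, known gaps
    validation_methods: list[str]       # e.g. holdout tests, bias audits
    performance_metrics: dict[str, float]
    known_limitations: list[str]
    last_reviewed: date = field(default_factory=date.today)

    def is_stale(self, max_age_days: int = 180) -> bool:
        """Flag records overdue for review. The cadence here is a
        policy choice for illustration, not a statutory number."""
        return (date.today() - self.last_reviewed).days > max_age_days
```

The design point is ownership: "available for inspection" presupposes one current source of truth that a named person maintains.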
The conformity assessment requirement creates a supply chain obligation: a portfolio company deploying a third-party AI system in a high-risk use case cannot simply rely on the vendor's compliance. It must satisfy itself that the system meets the Act's requirements for its specific deployment context. Many SaaS AI vendors are not yet prepared to provide the documentation their customers need to satisfy this requirement.

## Timeline: What Is Already Enforceable

The phased enforcement timeline requires precise understanding.

**2 February 2025 — Unacceptable-risk prohibitions enforceable.** Any prohibited AI practice from that date is subject to fines of up to €35 million or 7% of global annual turnover.

**2 August 2025 — General-purpose AI model provisions enforceable.** This affects portfolio companies that develop or use foundation models. Providers of GPAI models with systemic risk — a presumption triggered at 10^25 FLOPs of cumulative training compute — face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. (A back-of-envelope estimate of what that threshold means in practice follows this timeline.)

**2 August 2026 — High-risk AI obligations under Annex III enforceable.** Conformity assessments, technical documentation, registration, and human oversight requirements must be in place for all high-risk AI systems. New deployments must comply from this date; existing systems have until August 2027 for full conformity.

**2 August 2027 — Transitional period for existing high-risk AI systems ends.** Legacy systems cannot continue operating out of compliance after this date.
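To make the 10^25 FLOPs presumption concrete, a rule of thumb from the scaling-law literature (not from the Act) estimates training compute as roughly 6 FLOPs per model parameter per training token. The model sizes and token budgets below are hypothetical assumptions for illustration, not disclosed figures for any real model.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; the Act's presumption for GPAI models

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token.
    A heuristic from the scaling-law literature, not a statutory formula."""
    return 6 * params * tokens

# Hypothetical models; parameter counts and token budgets are assumptions.
midsize = training_flops(params=70e9, tokens=2e12)    # ~8.4e23 FLOPs
frontier = training_flops(params=1e12, tokens=10e12)  # ~6.0e25 FLOPs

for name, flops in [("midsize", midsize), ("frontier", frontier)]:
    status = "presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

For most portfolio companies the relevant question is whether they are a provider or merely a deployer of such models; the systemic-risk tier is aimed at frontier-scale providers, but diligence still has to establish which side of that line a company sits on.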
The consequence of this timeline is that PE investors assessing portfolio companies today need to include AI Act compliance status as a live diligence item. The question is no longer hypothetical: a portfolio company with an uncertified high-risk AI system and a non-existent quality management system has a quantifiable regulatory liability. That liability needs to appear in deal valuations, in management warranties, and in post-acquisition 100-day plans.

## Board Liability: The Personal Director Dimension

The EU AI Act creates a personal dimension of exposure for directors that is not yet fully appreciated by most PE-backed boards. The Act does not establish individual criminal liability in the way that data protection law sometimes does. But it creates organizational obligations that require board-level oversight and decision-making. A board that has not reviewed the company's AI systems for Act compliance, has not commissioned a conformity assessment for high-risk systems, and has not implemented a quality management system for AI governance may be exposed to the argument that it failed in its duty of care — both under the AI Act itself and under applicable national company law.

More concretely: the Act's whistleblower protection provisions create a channel through which employees can report AI Act non-compliance to national supervisory authorities. An employee who is subject to a non-compliant algorithmic management system — one that, say, makes promotion decisions without the transparency and human oversight the Act requires — can initiate a supervisory investigation. The investigation will examine whether the board had adequate oversight and control of the AI system. If it did not, the board carries the exposure.

For PE directors sitting on multiple portfolio company boards, the governance implication is clear: an AI Act compliance checklist is now board-level agenda material, not IT department material.

## The NIS-2 Overlap for Critical Infrastructure Portfolios

PE funds with portfolio exposure in critical infrastructure — energy, water, digital infrastructure, transport, health — face a compounding regulatory environment: the EU AI Act operating alongside the NIS-2 Directive, which became enforceable in October 2024. NIS-2 imposes cybersecurity risk management obligations and incident reporting requirements on operators of essential services.

The overlap with the AI Act creates a dual obligation: critical infrastructure AI systems must simultaneously satisfy AI Act high-risk compliance requirements (conformity assessment, documentation, human oversight) and NIS-2 cybersecurity obligations (risk assessments, security measures, incident reporting within 24 hours of detection). The two regimes are not fully harmonized. A portfolio company that designs its AI governance program around one without reference to the other will find gaps.

Critically, the supply chain provisions of NIS-2 — which require operators to assess and manage cybersecurity risks from technology suppliers and vendors — overlap with the AI Act's requirements for high-risk AI supply chain documentation. For PE funds managing critical infrastructure assets, the compliance program needs to be designed around the intersection of both regimes, not compliance with each in isolation.

## Practical Checklist for Portfolio Company GCs

For general counsel at PE-backed companies, the following items require immediate action:

1. **Complete an AI systems inventory.** Map every AI system deployed across the organization against the Annex III use case list. This includes third-party SaaS tools that contain AI components, not just proprietary AI development. Most companies discover more high-risk exposure here than their initial assessment suggested.

2. **Classify each system.** For each system identified, determine the applicable tier (unacceptable, high, limited, minimal). Document the classification rationale. This documentation will be required if regulators investigate.

3. **Assess conformity for high-risk systems.** Determine whether existing high-risk systems can satisfy the Act's requirements with modifications, or whether they require replacement. Engage vendors to provide the documentation needed for supply-chain conformity assessment.

4. **Implement or commission a quality management system.** The QMS required by the Act covers AI lifecycle management from design through decommissioning. This is not the same as ISO 9001 or existing quality systems — it has AI-specific components that require adaptation.

5. **Establish human oversight mechanisms.** For high-risk AI systems, document the human oversight points built into operation. Where oversight is inadequate, redesign the operating procedure before August 2026.

6. **Review management warranties and D&O coverage.** In the context of PE transactions, ensure that management warranties in SPA documentation include AI Act compliance representations. Review whether existing D&O insurance covers regulatory investigations under the AI Act.

7. **Establish an ongoing monitoring and incident reporting protocol.** Post-deployment, high-risk AI systems require monitoring for performance degradation and bias drift, with incident reporting obligations. This cannot be a one-time exercise; a minimal drift-monitoring sketch follows this list.
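As a sketch of what item 7 can look like in operation: the population stability index (PSI) is one widely used statistic for detecting input drift between a validation-time reference sample and live production inputs. The feature, thresholds, and cadence below are industry conventions and illustrative assumptions; the Act requires monitoring and incident reporting, not this particular statistic.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature.
    Common rule of thumb (industry convention, not statute):
    < 0.1 stable, 0.1 to 0.25 investigate, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid log(0) in sparse bins; live values outside the
    # reference range fall out of the bins, which is fine for a sketch.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.normal(600, 50, 10_000)  # e.g. credit scores at validation
live_scores = rng.normal(585, 60, 10_000)       # this month's production inputs

value = psi(reference_scores, live_scores)
if value > 0.25:
    print(f"PSI={value:.3f}: significant drift, trigger incident review")
elif value > 0.1:
    print(f"PSI={value:.3f}: drift emerging, investigate")
else:
    print(f"PSI={value:.3f}: stable")
```

The governance point is the escalation path rather than the statistic: a drift flag with no named owner and no reporting protocol does not satisfy an ongoing-monitoring obligation.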
The compliance window is not closed. But the lead time for building the quality management infrastructure, completing conformity assessments, and remediating non-compliant systems is measured in months, not weeks. Portfolio companies that begin this process in the first half of 2026 will be in the minority. They will also be the ones that avoid the regulatory exposure that will become visible when national supervisory authorities begin active enforcement.

The EU AI Act is not GDPR with different vocabulary. It is a different and more technically demanding regime. Treating it as an extension of existing data protection compliance will produce systematic under-investment in the areas that matter. The boards and legal teams that understand this distinction, and act on it before the August 2026 enforcement date, will have built something durable. The ones that do not will discover the cost of delay in the worst possible context.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)