EU AI Act Omnibus Pushes High-Risk Deadline to 2027

The EU Parliament and Council agreed on May 7, 2026 to delay high-risk AI compliance to December 2027, add a nudifier app ban, and give the machinery sector a permanent carve-out.

EU negotiators clinched a deal at 4:30 a.m. on May 7, rewriting the AI Act's compliance calendar and giving companies an extra 16 months to meet the regulation's most demanding requirements. The agreement also permanently carves the machinery sector out of direct AI Act scope and adds a new outright ban on AI-generated intimate imagery without consent.

TL;DR

  • High-risk AI under Annex III faces a new deadline of December 2, 2027, pushed back from August 2, 2026
  • AI embedded in regulated products (medical devices, toys, lifts) gets until August 2, 2028
  • Synthetic content watermarking and disclosure rules take effect December 2, 2026, with no further delay
  • A new ban on nudifier apps and AI-produced child sexual abuse material takes effect December 2026
  • Machinery makers get a permanent carve-out; SME exemptions extend to small mid-cap companies

The deal was nearly derailed by pressure from German Chancellor Friedrich Merz, who pushed for broader sectoral carve-outs. The final text held the line on substantive requirements while conceding on timelines. The original August 2, 2026 compliance date for high-risk AI was just under three months away when negotiators signed off.

"This agreement significantly supports companies by reducing recurring administrative costs while preserving the essential protections European citizens expect from AI systems that touch their fundamental rights."

  – European Council, May 7, 2026

What Changed

The High-Risk AI Deadlines

The AI Act sorts AI into three tiers. The middle tier - high-risk systems under Annex III - covers AI that makes or supports decisions affecting individuals in employment, education, credit, biometrics, law enforcement, and border management. Those systems now have until December 2, 2027 to certify conformity, register in the EU database, and implement the required human oversight and logging.

AI systems functioning as safety components inside products already governed by EU product safety law - Annex I, covering medical devices, lifts, toys, watercraft, and related sectors - get an additional year on top of that. Their deadline moves to August 2, 2028.

| Category | Old deadline | New deadline |
|---|---|---|
| Annex III high-risk AI (employment, credit, biometrics, law enforcement) | 2 Aug 2026 | 2 Dec 2027 |
| Annex I safety components (medical devices, machinery, lifts, toys) | 2 Aug 2026 | 2 Aug 2028 |
| Synthetic content watermarking and disclosure | No delay proposed | 2 Dec 2026 |
| Nudifier and AI-produced CSAM ban | Not in original Act | 2 Dec 2026 |

The Machinery Carve-Out

The machinery sector is the only Annex I category to receive a full structural exemption. AI systems inside regulated machinery will be governed by health and safety requirements written into delegated acts under the Machinery Regulation rather than falling directly under AI Act compliance obligations. Other Annex I sectors may receive Commission-defined exceptions through implementing acts, but they remain under the AI Act framework until those exceptions are formally established.

The Nudifier Ban

Article 5 - the AI Act's prohibited practices list - now includes AI systems that produce non-consensual intimate imagery and child sexual abuse material. The European Parliament statement cited Grok-created content incidents involving millions of images as the direct trigger. Providers whose systems include effective preventive safeguards are not covered by the prohibition. The compliance deadline matches the watermarking requirement: December 2, 2026.

Adding a provision to Article 5 requires unanimous Council agreement, which is a harder political bar than amending the Annex III list. That it cleared the bar matters. It signals a political consensus that didn't exist when the original AI Act passed.

Who Gets More Time

Companies

Banks, insurers, recruiters, HR platforms, and public agencies using AI for decisions that affect individual rights now have until December 2027 to comply with Annex III requirements. That's 16 months beyond the deadline they had been planning toward.

The Computer and Communications Industry Association (CCIA Europe) welcomed the extension but called it "the bare minimum given persistent delays in the delivery of EU technical standards that are necessary for compliance, but still missing." The standards bodies tasked with producing technical norms for high-risk AI haven't delivered a complete set. Compliance with the original August 2026 deadline was practically impossible for most sectors without those standards.

Several large firms, mostly US-headquartered companies with European legal teams, had already started building compliance programs for August 2026. The delay doesn't undo that investment but removes the competitive disadvantage they'd have had against peers who held off.

Image: The European Parliament hemicycle in Brussels, where the formal vote on the Omnibus amendments will take place before August 2026. (Source: commons.wikimedia.org)

Users and Consumers

BEUC, the European Consumer Organisation, argued the deal creates "a less safe digital environment." Their concern isn't the timeline extension itself - it's what the extension signals about enforcement appetite. High-risk AI systems will keep operating under the old regime for another 16 months with no new accountability framework in place.

For individual users, the practical near-term change from this deal is the nudifier ban. Anyone running an AI companion or image generation platform in the EU has less than seven months to either add effective preventive safeguards or pull products that could generate prohibited content.

Competitors Outside the EU

For US and Asian AI companies operating in Europe, the delay is a compliance gift. The US context matters: the White House AI executive order preempting state-level AI rules created a single domestic framework, which may make EU compliance planning easier for US firms than juggling dozens of state laws would have been. The two regulatory regimes are now running on different clocks: the US framework is still being written, while the EU framework keeps being delayed.

China's domestic AI governance timeline doesn't align with either. Chinese AI companies selling into the EU still face the same December 2027 deadline as everyone else. The carve-outs negotiated by European industry groups don't extend to non-EU firms.

Separately, the five frontier AI labs now under US pre-release review - Google, Microsoft, xAI, OpenAI, and Anthropic - are all operating in the EU market. Their pre-release testing obligations in the US have no formal link to EU high-risk classification, but regulators on both sides of the Atlantic are watching whether voluntary commitments translate into compliance-ready systems.

Image: The Brussels plenary chamber during session. The Parliament adopted its position on the AI Act amendments by 569 votes in favour in March 2026. (Source: commons.wikimedia.org)

What Happens Next

The provisional agreement is political. Both Parliament and Council must formally adopt the text before it enters into law. Co-legislators are targeting formal adoption before August 2, 2026 - the original compliance deadline that would otherwise kick in for Annex III systems while the amendment is still pending.

If formal adoption slips past August 2, there's a gap period where the original requirements technically apply but the agreed delay hasn't been codified. Legal advisers are already recommending clients document good-faith compliance progress regardless of when the formal text lands. That paper trail matters if enforcement actions arise during any gap.

The deal is being framed as a final extension. Parts of the AI Act had already been delayed twice before this agreement. Modulos, a compliance firm that tracked the process closely, argues the postponement case has been exhausted: another delay would undermine the regulation's global standard-setting credibility. Whether that framing holds depends on whether EU standards bodies deliver the missing technical norms on schedule before December 2027.

If Connecticut's SB5 and similar US state AI bills keep advancing, US companies may end up navigating tighter domestic requirements than EU high-risk compliance before Brussels' deadline arrives.



About the author

Daniel Okafor, AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.