The EU AI Act has been law since August 2024, but for most companies, it’s been more theoretical than operational. That ends in August 2026.
Phase 2 enforcement brings high-risk AI systems in employment, education, and critical infrastructure under strict compliance requirements, and this isn't a grace period. The general prohibition rules have already been active since February 2025. Phase 2 adds the real teeth: mandatory risk assessments, transparency obligations, and human oversight requirements for any AI system touching hiring, worker management, student evaluation, or educational tools.
What Phase 2 Actually Requires
If your AI system is used for any of the following in the EU market, you now need to comply:
Employment AI: Resume screening, interview analysis, performance monitoring, promotion decisions, workforce allocation, termination recommendations. If an AI influences someone’s job, it’s high-risk.
Education AI: Student evaluation, admissions decisions, learning assessment, behavioural monitoring in schools. The AI that grades your child’s exam or recommends whether they get into university? High-risk.
Critical infrastructure: AI used in the management and operation of energy, water, transport, and other essential services. The stakes were already obvious; what's new is the formal compliance requirements.
Mandatory obligations include:
- Risk assessment and mitigation before deployment
- Human oversight mechanisms (not optional — must be built in)
- Transparency and documentation requirements
- Ongoing monitoring and incident reporting
- Conformity assessments for high-risk systems
Who This Actually Affects
This isn’t just a European story. The Brussels Effect — where EU regulations effectively become global standards because nobody wants to build separate products for one market — means this hits every major AI company.
If you’re a US-based HR tech company selling into Europe, you’re in scope. If you’re an edtech platform used by EU schools, you’re in scope. If you’re a Kiwi company using AI for hiring and you have any EU employees or customers, you’re in scope.
This is the same dynamic we’ve seen with GDPR. The regulation is European, but the impact is global. AI compliance isn’t optional anymore — it’s a cost of doing business.
How This Differs From Phase 1
Phase 1 (February 2025) focused on outright prohibitions: things nobody should be doing with AI, like social scoring, real-time biometric identification in public spaces, and manipulation of vulnerable groups. Important, but mostly targeting abuses that were already controversial.
Phase 2 is about the everyday uses of AI — the resume screeners, the student evaluators, the hiring tools — and it says: these are legitimate, but they need oversight, transparency, and accountability. That is a much bigger compliance surface.
The NZ Angle
New Zealand doesn’t have an EU-style AI Act, but NZ companies selling into the EU — or using AI tools built by companies that sell into the EU — will feel this indirectly. If your HR platform adds EU compliance features, those features will likely become the default globally. The global AI workforce shift means regulation in one market shapes product design everywhere.
For NZ policymakers, the EU Act is becoming the de facto benchmark. When we eventually draft our own AI regulation framework, the question won’t be “should we regulate?” but “how much of the EU framework do we adopt?”
🔍 The Bottom Line
August 2026 isn’t a warning shot — it’s a deadline. The EU AI Act Phase 2 transforms AI compliance from a nice-to-have into a legal requirement for any AI system touching employment, education, or critical infrastructure in the EU. The Brussels Effect means this ripples globally. If you build or buy AI, you need to know what Phase 2 requires. The grace period is over.