The European Commission is negotiating with OpenAI for access to a new AI model designed to identify cybersecurity vulnerabilities — a strategic move that puts Anthropic on the back foot as regulators demand access to frontier AI systems before it’s too late.
The talks, reported by Politico on 11 May, involve OpenAI offering the Commission access to a model capable of finding and exploiting gaps in cyber defenses. Former UK Chancellor George Osborne, now OpenAI’s lead in the discussions, wrote to the Commission this week, saying the company had started “the process of contacting member states” regarding the model.
The pitch is simple: let us help you find the vulnerabilities before the bad guys do.
Why it matters
This isn’t just a product demo. It’s the EU’s first real attempt to get hands-on with frontier AI models for security purposes — and it exposes a growing fracture between regulators and AI companies over who gets to see what, and when.
What is frontier AI model access? It refers to the ability of governments and regulators to evaluate, test, and potentially monitor advanced AI systems — particularly those with cybersecurity, manipulation, or other dangerous capabilities. Under the EU AI Act, the AI Office gains enforcement powers from August 2026 to demand access to general-purpose AI models operating in the EU market.
The timing is deliberate. The AI Office’s enforcement powers kick in on 2 August 2026, and director Lucilla Sioli made clear at a European Parliament hearing on 6 May that Anthropic will be subject to her office’s jurisdiction. Commission spokesperson Thomas Regnier added: “Once the enforcement powers of the AI Office start in August 2026, we will ensure to receive, if needed, [Mythos] access.”
Anthropic plays hard to get
While OpenAI is courting the Commission, Anthropic is dodging. Last week, Anthropic declined an invitation to meet with European Parliament members and ENISA representatives to discuss Mythos’ cybersecurity risks, reportedly because the invitation came “at short notice.”
Sioli confirmed the AI Office has held discussions with Anthropic — but those talks have not yielded access to Mythos. She noted that the model presents “significant capabilities” and warned that “these leaping capabilities in cyber are actually linked to the fact other capabilities of the model have increased significantly.”
The subtext: Anthropic knows Mythos is powerful enough that regulators should be worried, and it’s playing for time.
Not just about one model
Commission official Despina Spanou pushed back on framing this as purely a Mythos problem. “We need to de-mystify the myth that this discussion is all about Mythos,” she said. “This discussion is about all these models that will bring change to the way we do [cyber] preparedness.”
Former ENISA board member Hans de Vries offered a more measured take: frontier AI creates a “pain situation for a few years, but in the end, it will definitely help us all.” He noted that existing AI security tools are already patching vulnerabilities, even if they’re less sophisticated than Mythos or ChatGPT 5.5.
But “pain for a few years” is cold comfort when those years involve nation-state adversaries with access to increasingly capable AI systems.
What this means for NZ
New Zealand doesn’t have an equivalent to the EU AI Act’s enforcement regime. Our government is still consulting on AI regulation, and there’s no requirement for AI companies to provide model access to NZ authorities. That means:
- NZ agencies can’t evaluate frontier AI cybersecurity risks independently
- We’re reliant on EU and allied intelligence sharing for threat assessment
- The Mythos situation shows what happens when a small market has no leverage over frontier AI companies
If the EU — with 450 million people and the world’s most aggressive AI regulation — struggles to get model access, NZ has essentially zero bargaining power.
🔍 THE BOTTOM LINE
OpenAI is playing chess, Anthropic is playing for time, and the EU is about to get a queen — enforcement powers in August. The frontier AI access debate just moved from theoretical to operational, and every country without regulatory leverage (hi, NZ) is watching from the cheap seats.
❓ Frequently Asked Questions
Q: What is the EU AI Office? The EU AI Office, led by director Lucilla Sioli, is the EU’s dedicated body for overseeing general-purpose AI models. It gains enforcement powers under the AI Act from August 2026, including the ability to demand access to models operating in the EU market.
Q: Why does model access matter? Without access, regulators can’t independently evaluate whether a model poses cybersecurity risks, can generate dangerous content, or could be misused. It’s the difference between taking a company’s word and checking for yourself.
Q: What should NZ do? NZ should coordinate with Five Eyes allies on AI threat assessment, push for model access provisions in any domestic AI regulation, and invest in cybersecurity workforce upskilling — because the skills gap matters more than the regulation gap right now.
Sources
- IAPP: OpenAI grants European Commission access to new model
- Politico: OpenAI EU access talks
- Politico: EU pressure builds on Anthropic over Mythos hacking risks