The Numbers Are Staggering
In April 2026, Firefox shipped 423 bug fixes. In April 2025: 31.
That’s a 1,264% increase. And almost all of it is thanks to one thing: Anthropic’s Mythos.
Mozilla published the full technical breakdown on Thursday, revealing that Anthropic’s cybersecurity-focused AI model had unearthed 271 confirmed vulnerabilities in Firefox — including multiple sandbox exploits and a bug that had lain dormant in the code for 15 years.
Let that sink in. A bug that survived a decade and a half of human code review, automated testing, and millions of users’ collective experience was found by an AI in a few hours.
🔬 How Mythos Actually Works
This isn’t a generic code assistant like Copilot scanning for known patterns. Mythos operates differently:
- Static analysis at scale — Scans entire codebases for vulnerability patterns, not just syntax
- Exploit path validation — When it finds a potential bug, it writes a proof-of-concept exploit to confirm the vulnerability is real
- Sandbox attack simulation — For security-critical components, it tests whether an exploit can actually escape restricted environments
- Self-filtering — The model evaluates its own findings and discards false positives before presenting them to human reviewers
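The find → validate → filter loop above can be sketched in a few lines. This is a minimal illustration, not Anthropic's actual pipeline; the `Finding` fields, the `triage` function, and the confidence threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    location: str            # file/function where the candidate bug lives
    poc_succeeded: bool      # did a proof-of-concept exploit actually trigger it?
    model_confidence: float  # model's own estimate that this is a real bug

def triage(candidates: list[Finding], threshold: float = 0.9) -> list[Finding]:
    """Self-filtering step: keep only findings the model both confirmed
    with a working proof of concept and still rates highly on review."""
    return [f for f in candidates
            if f.poc_succeeded and f.model_confidence >= threshold]

candidates = [
    Finding("dom/ipc/Sandbox.cpp", poc_succeeded=True,  model_confidence=0.97),
    Finding("netwerk/cache.c",     poc_succeeded=False, model_confidence=0.95),
    Finding("layout/painting.cpp", poc_succeeded=True,  model_confidence=0.40),
]
confirmed = triage(candidates)
print([f.location for f in confirmed])  # → ['dom/ipc/Sandbox.cpp']
```

The point of the sketch is the gate at the end: only findings that survive both the proof-of-concept check and the model's own second look ever reach a human reviewer.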
This final step is crucial. Previous AI security tools flooded teams with false positives. Mozilla’s researchers say the quality threshold has shifted dramatically: “It is difficult to overstate how much this dynamic changed for us over a few short months.”
Brian Grinstead, distinguished engineer at Mozilla, told TechCrunch: “These things are actually just suddenly very good.”
🎯 The Sandbox Breakthrough
The most impressive finding: Mythos found vulnerabilities in Firefox’s sandbox system — the most aggressively secured part of the browser. To find sandbox bugs, the model has to:
- Write a compromised patch for Firefox
- Attack the most secure part of the software with that patch deployed
- Demonstrate the exploit chain end-to-end
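Conceptually, that three-step check is a harness like the one below. Every function here is a stub invented for illustration — nothing resembles Mozilla's or Anthropic's real tooling, and a real harness would run the patched build inside an actual isolated environment.

```python
def build_with_patch(patch: str) -> str:
    """Stub: pretend to produce a Firefox build with the adversarial patch applied."""
    return f"firefox-build[{patch}]"

def run_in_sandbox(build: str, exploit: str) -> bool:
    """Stub: run the exploit against the build inside a restricted sandbox
    and report whether it escaped. Here we simulate one known chain."""
    return exploit == "escape-via-ipc" and "ipc-patch" in build

def validate_exploit_chain(patch: str, exploit: str) -> bool:
    # Step 1: deploy the compromised patch.
    build = build_with_patch(patch)
    # Steps 2-3: attack the hardened component and confirm escape end-to-end.
    return run_in_sandbox(build, exploit)

print(validate_exploit_chain("ipc-patch", "escape-via-ipc"))  # → True
```

What makes the result credible is the final step: a finding only counts if the full chain, patch deployed and escape demonstrated, actually runs.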
Mozilla’s bug bounty pays up to $20,000 for sandbox vulnerabilities — the highest reward available. Despite that, Grinstead says “we don’t get them at the volume we are able to find with this technique.”
That means AI is now outperforming human penetration testers on the hardest, highest-reward cybersecurity tasks.
⚠️ The Other Side of the Coin
Here’s the part that should keep CISOs awake at night.
Anthropic has been scrupulous about responsible disclosure. But the same capabilities that let Mythos find 271 Firefox bugs can also be used offensively. If a well-resourced adversary has access to similar AI models — and there’s no reason to think they don’t — then every software project on the internet is facing a new class of threat.
Mozilla’s researchers note that most of the bugs Mythos discovered globally likely haven’t been patched yet. The gap between discovery and remediation is widening.
WIRED’s coverage put it bluntly: “Amid a raging debate over the impact AI models will have on cybersecurity, Mozilla’s announcement settles one question: the defensive potential is real. The offensive potential is, presumably, just as real.”
📈 What This Means for Software Everywhere
The implications cascade across the entire software industry:
- Open-source projects win. Many of these bugs were found in Firefox because Mozilla opened its codebase to Mythos analysis. Other open-source projects can benefit the same way. Proprietary software with smaller security teams? Not so much.
- Bug bounty programs change. If AI can find $20,000 sandbox bugs faster than humans, what happens to the economics of crowdsourced security testing? Bug bounty hunters may need to pivot to AI-assisted approaches.
- CI/CD pipelines get security AI. Within 12 months, I’d expect every serious software project to have an AI vulnerability scanner as a mandatory CI gate — just like linters and unit tests today.
- Zero-day market disruption. The market for undisclosed vulnerabilities could shift significantly if AI models make finding them cheaper and faster. Whether that helps defenders or attackers depends on who gets there first.
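The CI-gate idea is easy to picture: a pipeline step that reads the scanner's report and fails the build on confirmed findings, exactly like a failing lint check. The report format and field names below are invented for illustration; no such standard exists yet.

```python
import json
import sys

def gate(report_json: str, max_severity: str = "low") -> int:
    """Return a CI exit code: nonzero if any confirmed finding
    exceeds the allowed severity."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    findings = json.loads(report_json)["findings"]
    blocking = [f for f in findings
                if f["confirmed"] and order[f["severity"]] > order[max_severity]]
    for f in blocking:
        print(f"BLOCKED: {f['severity']} vulnerability in {f['id']}", file=sys.stderr)
    return 1 if blocking else 0

report = json.dumps({"findings": [
    {"id": "candidate-1", "severity": "high", "confirmed": True},
    {"id": "candidate-2", "severity": "low",  "confirmed": True},
]})
print(gate(report))  # → 1
```

In a real pipeline the step would end with `sys.exit(gate(report))`, so a high-severity confirmed finding blocks the merge the same way a failing unit test does.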
❓ And Yet: Humans Still Fix the Bugs
One critical caveat: Mozilla isn’t using AI to patch the bugs yet.
Grinstead was explicit: “For the bugs we’re talking about in this post, every single one is one engineer writing a patch and one engineer reviewing it. We have not found it to be automatable.”
So the current state of play is:
- ✅ AI finds vulnerabilities faster and better than humans
- ❌ AI still can’t write reliable patches for complex security bugs
That’s good news for security engineers — for now. But given the pace of improvement, I wouldn’t bet on that remaining true for long.
🌏 NZ Angle: Small Country, Big Attack Surface
New Zealand’s cybersecurity position is already precarious. We have a small talent pool, limited bug bounty programs, and critical infrastructure that often runs on older software.
For small economies, AI security auditing could be transformative:
- A single Mythos-class system could audit the entire country’s critical software
- NZ-based security consultancies could offer AI-augmented audits to local businesses
- CERT NZ could deploy AI vulnerability scanning across government systems
But the flip side is equally real: NZ organisations that aren’t using AI security tools will be increasingly exposed. The attacker-defender gap just got wider, and the penalty for not adopting AI defence is rising daily.
🤔 My Take: The Cybersecurity Tipping Point
I’ve been sceptical about AI’s practical cybersecurity value for a while. The pattern was always: impressive demo, terrible real-world results, flood of false positives.
Mythos changes that calculus.
The 423-to-31 bug fix ratio tells the story. Not in hype, not in promises, but in production patches shipped to hundreds of millions of users. This isn’t a lab result. It’s live, deployed, and making Firefox measurably more secure.
The question now isn’t “will AI change cybersecurity?” It’s “how fast will the laggards get exploited?”
There’s a darkly humorous parallel here. For years, security teams have been understaffed and overwhelmed. AI was supposed to help. But AI’s main effect seems to be making the offensive side dramatically more powerful — and only then giving defenders tools that might barely keep up.
Mozilla caught 271 bugs. But those were the ones Mythos found before bad actors did. How many were already being exploited? We’ll never know.
🔍 THE BOTTOM LINE: Mythos finding 271 Firefox vulnerabilities — including a bug that hid in the code for 15 years — marks the moment AI security auditing became real, not theoretical. The defensive tools are suddenly very good, which means the offensive tools are probably just as good. Patch your software. Get AI scanning. The gap between those who do and those who don’t is about to become a chasm.