Breaking News

Google Stopped the First AI-Built Zero-Day Exploit — and That Changes Everything

The first confirmed AI-built zero-day exploit was stopped by Google before mass exploitation. The code had telltale AI hallmarks — a hallucinated CVSS score and textbook formatting.

AI cybersecurity, zero-day exploit, Google GTIG, Mythos, AI regulation

The “What If” Era Just Ended

Google’s Threat Intelligence Group (GTIG) has discovered and disrupted the first confirmed zero-day exploit developed with AI assistance. Criminal hackers used AI to find and exploit a vulnerability in an open-source, web-based system administration tool, bypassing two-factor authentication. Google says it disrupted the attack before a planned “mass exploitation event” — meaning this wasn’t a proof of concept. It was a weapon, cocked and ready.

🔍 THE BOTTOM LINE

AI-built cyberattacks are no longer theoretical. The first confirmed AI-developed zero-day was stopped in May 2026 — and the only reason you’re reading about a “near miss” instead of a breach is that Google got there first.


What Happened

GTIG’s report, released May 11, 2026, describes how “prominent cyber crime threat actors” developed a zero-day exploit using AI assistance. The exploit targeted a semantic logic flaw in an open-source web administration tool where the developer had hardcoded a trust assumption in the 2FA system. Classic human mistake — but the AI found it, packaged it, and weaponised it.
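GTIG hasn't published the vulnerable code, but a "hardcoded trust assumption" in a 2FA flow typically looks something like the following Python sketch. Everything here is hypothetical and illustrative — the function names, the `TRUSTED_SUBNET` shortcut, and the fixed code are stand-ins, not details from the actual tool:

```python
# Hypothetical sketch of a hardcoded trust assumption in a 2FA check.
# All names and values are illustrative, not from the real vulnerable tool.
import hmac

TRUSTED_SUBNET = "10.0.0."  # developer assumed internal traffic is safe


def verify_totp(secret: str, code: str) -> bool:
    # Stand-in for a real TOTP check (a real app would use a TOTP library).
    return hmac.compare_digest(code, "492817")


def login(password_ok: bool, totp_code: str, client_ip: str) -> bool:
    if not password_ok:
        return False
    # The flaw: requests from the "trusted" subnet skip the second factor.
    # Anyone who can proxy or spoof into that range bypasses 2FA entirely.
    if client_ip.startswith(TRUSTED_SUBNET):
        return True
    return verify_totp("secret", totp_code)
```

Each line is syntactically fine, which is why this class of semantic logic flaw survives human review: the bug is in the trust assumption, not the code — exactly the kind of pattern an AI fed a whole repository can spot.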

The telltale signs were in the Python script: a “hallucinated CVSS score” (AI confidently citing a vulnerability severity rating that doesn’t exist for this issue) and “structured, textbook” formatting — the kind of consistent, polished output you get from LLMs trained on security documentation, not the messy, idiosyncratic code humans write.

Google specifically notes they “do not believe Gemini was used” in developing the exploit. The threat actors also used “persona-driven jailbreaking” — prompts instructing AI to pretend it’s a security expert — and fed entire vulnerability repositories into AI models to find exploitable patterns.

Why This Is Different from AI-Discovered Bugs

This isn’t the same as Google’s own Big Sleep agent or DeepMind’s CodeMender finding vulnerabilities defensively. Those are AI tools built by security researchers for security. This is the flip side: criminals using AI as an offensive weapon — and succeeding enough that Google classified it as the real deal.

Earlier in 2026, a Linux vulnerability (CVE-2026-3141) was discovered with AI assistance, and the Mythos debate has raged for weeks about whether cybersecurity-focused AI models are net positive or net negative. This GTIG report answers that question decisively: both. AI finds bugs for defenders and attackers. The question was always who gets there first.

The Mythos Connection

This exploit lands in the middle of the ongoing Mythos debate. Anthropic’s cybersecurity model has been the subject of intense scrutiny — our coverage of the Mythos vs Firefox saga documented how access to powerful cyber AI models is reshaping the threat landscape. Meanwhile, OpenAI just granted the EU access to GPT-5.5-Cyber while Anthropic drags its feet on Mythos access.

The GTIG report doesn’t mention Mythos specifically, but the implications are clear: if criminals are already using general-purpose AI to build zero-days, specialised cybersecurity AI in the wrong hands would be significantly more dangerous. The defensive/offensive AI arms race just got its first real-world data point.

What This Means for New Zealand

NZ’s cybersecurity gap just got more urgent. The country already struggles with a shortage of cybersecurity professionals — our earlier investigation highlighted how NZ’s small cyber workforce faces asymmetric threats. If AI can now generate zero-day exploits at scale, the gap between attacker capability and defender capacity widens further.

The refreshed AI Blueprint for Aotearoa identifies NZ as “high-use, low-trust” on AI. This exploit is exactly the kind of scenario that drives low trust — and validates the Blueprint’s new focus on social licence. NZ organisations running open-source admin tools (and let’s be honest, most of them are) should be reviewing 2FA implementations and hardening access controls now, not after the next GTIG report.
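One concrete check for that review: make sure the second factor is never gated on ambient properties like source IP or an "internal request" flag. A minimal sketch of the safer pattern (function names hypothetical) runs every factor unconditionally, so there is no trusted-path shortcut to hardcode wrongly:

```python
# Hypothetical hardening sketch: the second factor is always enforced.
import hmac


def verify_totp(secret: str, code: str) -> bool:
    # Stand-in for a real TOTP library call; a production system would
    # validate a time-based code, not compare against a fixed string.
    return hmac.compare_digest(code, "492817")


def login(password_ok: bool, totp_code: str) -> bool:
    # No trusted subnet, no admin backdoor, no skip conditions:
    # both factors are checked on every login attempt.
    return password_ok and verify_totp("secret", totp_code)
```

Network position can still feed risk scoring or logging, but it should add scrutiny, never remove a factor.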

The Bigger Picture

Google’s report also reveals that hackers are increasingly targeting “the integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors.” In other words, as companies deploy AI agents with access to business systems, those agents become attack surfaces too. The exploit isn’t just “AI helps hackers” — it’s “AI systems are the targets.”

What is a zero-day exploit? A zero-day is a software vulnerability unknown to the vendor, giving attackers a window where no patch exists. “Zero-day” means the developer has had zero days to fix it. AI-built zero-days are particularly dangerous because AI can discover and weaponise them faster than human security teams can respond.


❓ Frequently Asked Questions

Q: What does this mean for NZ businesses? NZ organisations using open-source web admin tools should audit their 2FA implementations immediately. The exploit targeted a hardcoded trust assumption in authentication — a class of vulnerability that’s common in smaller deployments with limited security review.

Q: Was the AI that built the exploit one of the known cyber models? Google says they “do not believe Gemini was used.” The report doesn’t identify which AI model was used, but the exploit’s quality suggests it wasn’t a specialised cyber model — which makes the finding even more concerning.

Q: What should I do? Patch everything. Review authentication flows. If you’re running open-source admin tools exposed to the internet, treat them as high-risk. And read Google’s full GTIG report — the defensive recommendations are specific and actionable.


🔍 THE BOTTOM LINE

The first confirmed AI-built zero-day exploit wasn’t a research exercise — it was a weapon stopped just before deployment. The era of theoretical AI threats is over. For NZ and every other country with a cybersecurity skills gap, the urgency just shifted from “prepare” to “respond.”


Sources

Google Threat Intelligence Group, The Verge, BleepingComputer, CyberScoop