[Image: digital security operations centre with multiple monitors showing threat alerts and network maps]
📰 News Digest

Daily News: Google Stops First AI-Generated Zero-Day — China Drafts AI Agent Rules — US Preps Security Order Without Teeth

Google catches the first AI-developed zero-day exploit. China demands humans keep control of AI agents. US executive order skips mandatory testing. DeepSeek seeks $7B. ECB flags AI risk to banks.

Google Stopped the First Zero-Day Exploit Developed With AI

Google’s Threat Intelligence Group (GTIG) detected and disrupted the first confirmed zero-day exploit developed with AI assistance. The exploit, created by “prominent cyber crime threat actors,” targeted an open-source web-based system administration tool and would have bypassed two-factor authentication in a “mass exploitation event.” Google found telltale signs of AI involvement: a “hallucinated CVSS score” and “structured, textbook” formatting consistent with LLM-generated code. The company says it does not believe its own Gemini model was used. The report also warns that hackers are using “persona-driven jailbreaking” — instructing AI to roleplay as security experts — to find vulnerabilities.
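
What a "hallucinated CVSS score" looks like in practice: a CVSS v3.1 base score is a deterministic function of its vector string, so a writeup whose stated score doesn't match its own vector is a strong tell that the number was generated rather than calculated. Below is a minimal sketch of the kind of consistency check a triage pipeline could run. The metric weights and formula come from the published FIRST.org CVSS v3.1 specification; the vector and claimed score in the example are invented for illustration and are not from Google's report.

```python
# CVSS v3.1 base-metric weights, per the FIRST.org specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score from a vector string."""
    m = dict(part.split(":") for part in vector.split("/")[1:])  # skip "CVSS:3.1"
    changed = m["S"] == "C"
    pr = (PR_CHANGED if changed else PR_UNCHANGED)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10))

# Illustrative only: an advisory that claims 9.9 for a vector that computes to 8.8
claimed, vector = 9.9, "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H"
actual = base_score(vector)
if abs(actual - claimed) > 0.05:
    print(f"hallucinated score? stated {claimed}, vector computes to {actual}")
```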

🔍 THE BOTTOM LINE: The AI cyber arms race just moved from “AI finds bugs” to “AI writes exploits.” Mythos showed AI can find vulnerabilities. This shows AI can weaponise them. The defence-offence gap that Anthropic warned about isn’t theoretical anymore — it’s already being exploited.


China Publishes Draft Regulations for AI Agents — Humans Must Stay in the Loop

China’s Cyberspace Administration (CAC) published draft regulations governing AI agent behaviour, with a clear requirement: humans must always retain the ability to review and override agent decisions. The draft calls for developers to “clarify the reasonable boundaries and required authority for various decision-making methods” — distinguishing between decisions limited to users, decisions requiring user authorisation, and autonomous decisions by the agent. It identifies potential agent tasks including marking homework, analysing medical images, evaluating employee performance, and even managing “the entire bidding and tendering process.” The draft also calls for mandatory standards in healthcare, transportation, media, and public safety.
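
The draft's three decision tiers map naturally onto an authorisation layer in an agent framework. Here is a minimal sketch of how a developer might encode them; the tier names, the approval hook, and the example tasks' tier assignments are hypothetical illustrations, not taken from the CAC text.

```python
from enum import Enum, auto
from typing import Callable

class DecisionTier(Enum):
    USER_ONLY = auto()        # decision reserved to the user; agent may only recommend
    USER_AUTHORISED = auto()  # agent may act, but only after explicit user sign-off
    AUTONOMOUS = auto()       # agent may act alone, yet stays reviewable and overridable

class Agent:
    def __init__(self, approve: Callable[[str], bool]):
        self.approve = approve          # human-in-the-loop hook: must always exist
        self.audit_log: list[str] = []  # every action reviewable after the fact

    def act(self, action: str, tier: DecisionTier) -> str:
        self.audit_log.append(f"{tier.name}: {action}")
        if tier is DecisionTier.USER_ONLY:
            return f"recommendation only: {action}"  # the user decides, not the agent
        if tier is DecisionTier.USER_AUTHORISED and not self.approve(action):
            return f"blocked pending authorisation: {action}"
        return f"executed: {action}"

# Usage: the approval hook is where "humans must stay in the loop" lives
agent = Agent(approve=lambda a: input(f"allow '{a}'? [y/N] ").lower() == "y")
print(agent.act("flag anomalies in medical image batch", DecisionTier.AUTONOMOUS))
print(agent.act("submit tender bid", DecisionTier.USER_AUTHORISED))
print(agent.act("finalise employee performance rating", DecisionTier.USER_ONLY))
```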

🔍 THE BOTTOM LINE: China is the first major jurisdiction to draft regulations specifically for AI agents — not just models. The “human in the loop” requirement is the headline, but the real story is how specific Beijing gets about what agents can and can’t do. Compare this with the US approach of voluntary agreements and executive orders that skip mandatory testing.


US Prepares AI Security Executive Order — Without Mandatory Model Tests

The Trump administration is preparing an executive order directing US agencies to partner with AI companies on network protection from AI-enabled cyberattacks. But the directive stops short of requiring government approval for commercial model releases. It's the closest the US has come to formal AI oversight, yet it remains entirely voluntary: all five major AI labs (Google, Microsoft, xAI, OpenAI, Anthropic) now give the Commerce Department's CAISI pre-release access, but CAISI has no statutory authority and fewer than 200 staff. The labs' cooperation was catalysed by the Mythos crisis and the threat of harder-line executive action.

🔍 THE BOTTOM LINE: The US has the world’s most powerful AI labs voluntarily submitting to evaluation — with zero legal obligation and an office smaller than a mid-sized startup. It’s oversight theatre until it isn’t. One administration change, one policy shift, and the whole framework evaporates.


DeepSeek Seeks US$7 Billion in First External Funding Round

Chinese AI startup DeepSeek is seeking up to CNY50 billion (~US$7.35 billion) in its first external funding round, with founder Liang Wenfeng and China's state semiconductor fund backing the raise. DeepSeek V4.1 is reportedly due in June. The company that proved you could train competitive models cheaply is now raising at scale, potentially lifting its valuation above the US$45-50 billion range. The funding signals DeepSeek's shift from research project to commercial operation.

🔍 THE BOTTOM LINE: The “efficiency story” was always going to end here. You can’t stay a lean research lab when your models are good enough to compete commercially. DeepSeek’s $7B raise is proof that cost-efficient training doesn’t mean cost-efficient competition — it just means you arrive at the spending race with better unit economics.


ECB Governor Warns Banks on AI Cyber Risk

A European Central Bank official is pressing bankers to stress-test their infrastructure for AI-related cyber risks. The warning follows the ECB's earlier move to quiz bankers about risks from Anthropic's Mythos model. The ECB's supervisory arm is making AI cyber risk a priority for the 2026-28 supervisory cycle, alongside broader technology governance concerns.

🔍 THE BOTTOM LINE: When the central bank that oversees Europe’s largest financial institutions says “test your AI defences,” it’s not a suggestion. NZ banks should be paying attention — our financial system runs on the same software as everyone else’s.


China Launches Qinglang 2026 Anti-AI-Misuse Campaign

China’s CAC launched its annual Qinglang campaign, this year targeting deepfakes, AI-powered fraud, disinformation, and IP violations. The 2025 campaign took down 3,500+ products and scrubbed 960,000+ pieces of content. This year’s edition arrives the same week the White House accused China of “industrial-scale” AI theft — creating a dual narrative where Beijing cracks down on misuse while Washington accuses it of enabling theft.

🔍 THE BOTTOM LINE: China’s regulatory machine is getting more specific about AI harms each year. The geopolitical irony of launching an anti-misuse campaign while being accused of industrial-scale AI theft is not lost on anyone watching from Wellington.


All 5 Major AI Labs Now Give US Government Pre-Release Access

Google, Microsoft, and xAI have joined OpenAI and Anthropic in giving the US Commerce Department's Center for AI Standards and Innovation (CAISI) pre-release access to frontier models. CAISI has completed 40+ evaluations. The arrangement is entirely voluntary, with no statutory basis and no power to block releases; it was catalysed by the Mythos crisis and the threat of executive action.

🔍 THE BOTTOM LINE: Five labs, zero legal requirements, fewer than 200 staff. This is AI oversight held together by social pressure and crisis momentum. It works until it doesn’t.


❓ Frequently Asked Questions

Q: Does the AI-generated zero-day mean AI is now being used to attack NZ systems? No attacks have been detected here yet, but NZ runs the same open-source tools and platforms. Google found this exploit before it was deployed; the next one might not be caught. Our CERT should be tracking this.

Q: What do China’s AI agent rules mean for NZ? NZ often follows international regulatory patterns. China’s draft is the first to specifically address AI agents. The “human in the loop” principle is likely to appear in any NZ framework too — it’s becoming the global baseline.

Q: Is the US executive order meaningful? Symbolically, yes — it signals the administration takes AI security seriously. Practically, it’s a handshake agreement with no enforcement mechanism. If the five labs decided to stop cooperating tomorrow, there’s nothing the government could do.


🔍 THE BOTTOM LINE

The AI cybersecurity story has entered a new chapter: it’s no longer just about AI finding vulnerabilities — it’s about AI creating them. Google’s discovery of the first AI-generated zero-day exploit confirms what Anthropic warned about. China responds with the first agent-specific regulations. The US responds with a voluntary executive order. The pattern is clear: regulation follows capability, and capability is accelerating faster than any parliament or congress can draft.


📰 SOURCES