AI Daily News — May 9, 2026
US Tech Giants Agree to Pre-Release Government AI Testing
Google, Microsoft, and xAI have joined OpenAI and Anthropic in a voluntary agreement to give the US Commerce Department’s Center for AI Standards and Innovation (CAISI) pre-release access to evaluate AI models before public deployment. The expansion, catalysed by the Mythos crisis earlier this year, means all five major frontier AI labs now submit to government review through an office with fewer than 200 staff and no statutory authority to block a release.
The centre has completed more than 40 evaluations, including tests of state-of-the-art systems with safety guardrails stripped back to probe national-security risks: biological weapon synthesis pathways, cyberattack automation, and autonomous agent behaviours. Chris Fall now directs the centre following the four-day tenure of Collin Burns, who was reportedly pushed out by the White House over his ties to Anthropic, a company the administration was actively fighting.
Why it matters: This is the closest thing America has to AI oversight, and it’s built on nothing but voluntary cooperation. The system works because companies want it to. The question is how long that lasts, and whether a future administration hostile to these companies could unravel it entirely. — The Next Web | Bloomberg | CNN
EU Agrees to Ban AI “Nudifier” Apps, Delay High-Risk AI Rules
EU lawmakers reached a provisional deal on amendments to the AI Act that bans AI systems designed to create non-consensual intimate images and child sexual abuse material — the direct legislative fallout from the Grok image-generation scandal earlier this year. The ban covers placing such systems on the EU market, deploying them without reasonable safety measures, and using them to create prohibited content. Companies have until December 2, 2026 to comply.
On the other side of the ledger: obligations for high-risk AI systems have been pushed back — to December 2, 2027 for most use cases (biometrics, critical infrastructure, education, employment, law enforcement), and to August 2, 2028 for safety components. Watermarking obligations are delayed to December 2026. The omnibus package also streamlines enforcement, reduces regulatory overlap, and extends SME exemptions.
Why it matters: Europe is doing the regulatory splits, banning the worst AI harms while simultaneously kicking the hard compliance deadlines down the road. It’s pragmatic, but it’s also telling. The message is clear: Parliament can move fast on conduct nobody defends (child abuse material), but gets very deliberate about anything that might slow down AI adoption. — European Parliament | The Next Web
Anthropic Signs Compute Deal With SpaceX — Yes, Really
Anthropic announced a partnership with SpaceX that gives Anthropic exclusive access to all the compute capacity at Colossus 1, xAI’s data centre in Memphis, Tennessee — 300+ megawatts of new capacity available “within the month”. The deal also includes exploration of SpaceX’s orbital data centre satellites for future compute. This is Musk’s xAI infrastructure, after Musk merged SpaceX and xAI earlier this year. Musk has been a vocal critic of Anthropic.
The irony is palpable: the company whose AI safety ethos is a direct response to Musk’s early warnings about AI risk is now renting servers from him. But compute is compute, and Anthropic needs it. The company has been raising capital aggressively and signing compute deals wherever it can find the capacity.
Why it matters: When your biggest critic becomes your landlord, you know the compute crunch is real. This says less about politics and more about the physical reality of AI scaling — there aren’t enough data centres in the world, and companies will make uncomfortable bedfellows to get access. — Anthropic Blog | CNBC | SpaceNews
Airbnb Reveals AI Now Writes 60% of Its Code
Airbnb CEO Brian Chesky revealed that AI agents now write 60% of the company’s new code, up from previous quarters. Chesky said this lets one engineer do the work that previously required 20. The shift is part of a broader trend of tech companies “flattening teams”: reducing headcount while maintaining or increasing output through AI-assisted development.
The milestone coincides with an Airbnb engineer writing on Substack that they produce “99% of their code with LLMs” and consider “writing high quality production code with LLMs a solved problem.”
Why it matters: This is the number that should terrify and excite every software engineer. Not because AI replaces engineers — but because it redefines what “one engineer” can produce. Teams get smaller, expectations get higher, and the junior developer pipeline gets squeezed. — Benzinga | Economic Times
Scale AI Wins $500M Pentagon Contract
Scale AI has won a $500 million contract from the US Department of Defense — five times larger than its existing $100 million deal signed in September 2025. The contract, signed with the Pentagon’s Chief Digital and Artificial Intelligence Office, is designed to integrate AI tools into military decision-making and data-processing workflows. Microsoft, Amazon, and Google signed parallel classified-network AI agreements the same week.
The Pentagon now has eight firms’ AI systems approved for use on classified networks. Scale’s role sits in the data-labelling and decision-support layer — fixing the fragmented data quality that has held back operational AI deployment in military contexts.
Why it matters: Military AI procurement is accelerating fast, and the numbers are getting big enough to reshape the industry. Scale’s $500m contract is for data quality infrastructure — which tells you that even the Pentagon has figured out that models are only as good as the data underneath them. — Bloomberg | The Next Web
NZ’s First AI Factory: Datagrid Gets Resource Consent for 280MW Southland Campus
New Zealand’s Datagrid received full resource consent for its planned 280MW, 78,000 sqm data centre in Makarewa, near Invercargill. The facility — described as the country’s first “AI factory” — will be the second-largest electricity user in New Zealand after the Tiwai Point aluminium smelter. Datagrid has signed a 15-year, 140MW power deal with Mercury, and construction targets a 2028 opening.
The Conversation NZ published an analysis asking who benefits from the build-out, noting that AI data centres consume enormous power but create relatively few jobs once operational. The consent was celebrated by economic development officials but raises questions about electricity pricing and whether the benefits will flow to New Zealanders.
Why it matters: New Zealand is positioning itself as an AI infrastructure destination — clean energy, natural cooling, stable grid. But the trade-offs are real. One data centre the size of a small town, consuming power equivalent to a major industrial plant, for an industry that employs almost nobody locally. Is that the future we want? — BusinessDesk | Datacenter Dynamics | The Conversation
1X Opens NEO Humanoid Factory in California, Consumer Shipments Planned for 2026
Norwegian robotics company 1X Technologies has launched full-scale production at its new vertically integrated humanoid robot factory in Hayward, California, billed as America’s first facility of its kind. The NEO robot, designed for household use, operates quietly, and consumer deliveries are planned for this year. The company is actively hiring staff for the factory.
Why it matters: 2026 is shaping up as the year humanoid robots leave the lab. 1X’s NEO is targeting homes, not warehouses — a much harder problem than Agility’s Digit, which is already generating revenue in commercial settings. If 1X delivers, this is a genuine milestone. — GlobeNewsWire | The Robot Report
NZ Parliament Report Warns Businesses Aren’t Getting AI Returns
A New Zealand Parliamentary committee briefing on AI contains a sobering admission: the Department of Internal Affairs told MPs that large international businesses are not yet getting a return from AI, and most of their proofs of concept are not working. A National Bureau of Economic Research study cited in the report found 90% of surveyed firms saw no impact of AI on workplace productivity.
Victoria University AI expert Andrew Lensen says aggressive AI uptake hasn’t led to quick wins, and fears of being “left behind” haven’t materialised. He questions how much of the adoption wave is simply “enabling Copilot, declaring AI-enabled, and calling it a day.” Singapore PM Lawrence Wong, speaking alongside Christopher Luxon, said small economies should focus on finding AI niches rather than trying to build foundation models.
Why it matters: This is the honest conversation that rarely makes headlines. For all the hype, most businesses are not seeing ROI from AI. The question is whether we’re in a 1999 dot-com moment (overhyped now, transformed later) or a genuine productivity revolution that’s just taking longer to show up on the balance sheet. — Newsroom NZ
Deepfakes Bypass KYC: AI Identity Fraud Surges Globally
AI-generated synthetic identities and deepfakes are increasingly bypassing traditional Know Your Customer (KYC) verification systems globally. Incidents documented include deepfakes used to bypass India’s Aadhaar security in a loan fraud case, synthetic documents used to secure a $195,000 business loan in Miami, and AI-generated passports selling in bulk on underground markets. In NZ, the average cost of AI-generated identity fraud per business has reached $2.2 million.
Why it matters: The identity verification industry was already playing catch-up before generative AI. Now the gap is a chasm. Any system that relies on a user submitting a photo of their ID and a selfie is vulnerable — and the attackers are iterating faster than the defences. — OECD AI Incidents | IProov | CheckFile.ai
Salesforce Commits to Hiring 1,000 “AI-Native” Graduates
Salesforce announced its Builder program, recruiting 1,000 graduates and interns to build the future of Agentforce — its AI agent platform. The company’s data shows AI-native graduates are four times more likely to use AI daily, deliver three times faster than legacy managers, and drive a 40% increase in quality of work. The program is built around a “3As framework” — Attract, Assess, Activate.
Why it matters: Salesforce is betting big on the idea that the next generation doesn’t need to “adapt” to AI — they already operate natively within it. The program is as much a talent strategy as a signal to competitors: the companies that figure out how to absorb AI-native talent fastest will win. — Salesforce News
Figure AI’s Helix 02: Full-Body Autonomy for Humanoids
Figure AI announced Helix 02, an extension of its Helix neural network control system to full-body autonomy, enabling humanoids to walk, manipulate objects, and navigate real-world environments from pixels alone, without pre-programmed routines. The system was introduced with Figure’s 03 platform, which has entered commercial pilot deployments.
Why it matters: “From pixels to walking” is the kind of breakthrough that sounds incremental but isn’t. Previous humanoid control systems stitched together separate modules for perception and locomotion. Helix 02 runs everything through one neural network, which is how biological brains work, and apparently how robot brains will too. — Figure AI
🔍 THE BOTTOM LINE
Two stories define this week: regulation catching up and infrastructure building out. The US has voluntary pre-release testing for all five frontier labs, the EU is banning the worst harms while punting the hard deadlines — and both approaches reflect the same tension: nobody wants to slow AI down, but nobody wants to be caught unprepared either.
Meanwhile, New Zealand’s Datagrid approval, Anthropic’s SpaceX deal, and Scale’s Pentagon contract all speak to the same physical reality: AI needs power, data centres, and compute at a scale most of us haven’t fully absorbed yet. The humanoid robot race is entering production phase. And under everything, the productivity question lingers: are we building something revolutionary, or just running very expensive experiments at scale?
❓ FAQ
When do the EU’s new AI rules take effect? High-risk AI obligations now start December 2, 2027 (most use cases) or August 2, 2028 (safety components). The ban on AI “nudifier” systems takes effect December 2, 2026.
Is US AI model testing mandatory? No. It’s entirely voluntary. The Commerce Department’s CAISI has no statutory authority to block a model’s release. All five major labs have agreed to participate anyway.
How much power will NZ’s Datagrid data centre use? 280MW at full build-out — second only to the Tiwai Point aluminium smelter in New Zealand. Construction targets 2028 opening.
Did we miss something? AI moves fast. Drop us a line and we’ll cover it.