AI & Singularity

Trump's Beautiful Baby Just Got a Babysitter: White House Considers Vetting AI Models Before Release

From hands-off to check-first in under a year.

AI regulation · Trump administration · AI safety · Anthropic Mythos · cybersecurity

Remember when Trump said AI was a “beautiful baby” that “we can’t stop with foolish rules”? Yeah, about that.

The White House is now actively discussing an executive order that would create a government review process for new AI models before they’re released to the public. It’s the policy equivalent of letting your teenager throw house parties for a year, then suddenly installing a lock on the liquor cabinet.

What’s on the table

According to US officials briefed on the deliberations, the administration is considering:

  • An AI working group combining tech executives and government officials to design oversight procedures
  • A formal review process for new AI models — potentially similar to Britain’s approach, where multiple government bodies assess models against safety standards
  • Involvement from the NSA, the Office of the National Cyber Director, and the Director of National Intelligence in reviewing models

Crucially, some officials are pushing for a system that gives the government first access to AI models without blocking their release entirely. Think of it as a preview screening, not a ban.

What triggered the flip

The pivot started last month when Anthropic announced Mythos — an AI model so powerful at finding security vulnerabilities that Anthropic itself refused to release it publicly. The company called it a potential cybersecurity “reckoning.”

That one word did more to shift Washington’s posture than a thousand policy papers. The White House wants to avoid political fallout if an AI-enabled cyberattack hits on their watch. They’re also eyeing whether models like Mythos could give the Pentagon useful cyber-capabilities — a dual-use dynamic that makes regulation and military interest impossible to untangle.

There’s a backstory here too. Anthropic and the Pentagon have been locked in a bitter dispute over a $200 million contract and how the military should use AI in warfare. The Pentagon cut off government use of Anthropic’s technology in March. Anthropic sued. Yet Anthropic’s AI is still running in the military’s Maven system — helping analyse intelligence and suggest airstrike targets in the Iran conflict. The NSA is also using Mythos to assess vulnerabilities in US government software. It’s messy.

The irony writes itself

This is the same administration where Vice President JD Vance stood in Paris last year and warned that “excessive regulation of the AI sector could kill a transformative industry.” The same White House that rolled back Biden-era safety evaluation requirements as one of its first acts. The same David Sacks — the former AI czar who championed deregulation — who left in March, just before this reversal started taking shape.

Now Susie Wiles and Treasury Secretary Scott Bessent have stepped in to fill the void. They held a meeting with Anthropic CEO Dario Amodei last month — both sides called it “productive” — and they’re telling people they plan to take a bigger hand in AI policy going forward.

Tech executives are confused. Some argue government oversight will slow US innovation against China. Others want guardrails. The companies don’t even agree among themselves. As former Trump AI adviser Dean Ball put it: “The technology is moving extremely fast, and there are few formal procedures, but they also don’t want to overregulate. It’s a tricky balance.”

Why NZ should pay attention

New Zealand has no equivalent of the proposed US vetting regime. Our AI regulation is still largely voluntary guidelines. But if the US implements pre-release review, two things happen:

  1. NZ companies selling AI into the US will need to navigate compliance with whatever review process emerges
  2. Global norms shift — when the US moves from laissez-faire to oversight, other countries follow. The EU already has the AI Act. If America joins the regulation club, the pressure on smaller nations to adopt something similar intensifies.

For a country that imports nearly all its AI infrastructure, being on the wrong side of a supply chain that now includes government checkpoints is not ideal.

The real question

The administration insists any review system wouldn’t block model releases — just give the government a first look. But the gap between “we’re just checking” and “you can’t ship this” is a regulatory judgement call that expands over time. Every government review process in history has grown beyond its original scope.

The beautiful baby isn’t being stopped. It’s just getting a babysitter. And babysitters have a way of making rules.


SOURCES

New York Times, Reuters, Semafor