Breaking News

The US Government Quietly Deleted Big Tech's AI Safety Pledges — and Nobody's Talking About It

The US Commerce Department silently deleted the details of a voluntary AI safety testing agreement with Google, Microsoft, and xAI. No explanation. No announcement. Just gone.

US Commerce Department · AI safety · AI regulation · Google · Microsoft

The page was there. Then it wasn’t. And that’s the whole story.

Sometime in early May 2026, the US Commerce Department removed from its website the details of a voluntary agreement under which Google, Microsoft, and xAI committed to submitting new AI models to government scientists for pre-release security testing. The page — which outlined the specific commitments these companies made — was deleted without explanation, without a press release, and without any replacement.

Reuters first reported the deletion. The Next Web and CNA picked it up. And then, largely, everyone moved on.

They shouldn’t have.

🔍 THE BOTTOM LINE

When the government silently removes AI safety commitments from public view, it’s not housekeeping — it’s a signal. And the signal is: voluntary oversight is expendable.


What Was Deleted

On May 5, 2026, the Commerce Department announced that Google, Microsoft, and xAI had agreed to submit their new AI models to US government scientists for pre-release security testing. This was part of the voluntary framework that the Biden-era NIST AI Safety Institute had been building — a rare example of Big Tech agreeing to oversight, even if it was non-binding.

The specific page that was deleted contained the details of those commitments. What models would be tested. What the testing process looked like. What the government scientists would be looking for.

We know this because the Internet Archive captured the page before it disappeared. But the official government record is now gone.

Why It Matters

This deletion fits a pattern. The Trump administration has been systematically rolling back AI oversight:

  • The NIST AI Safety Institute has been gutted and restructured
  • The GUARD Act, which would create government monitoring of AI systems, is being narrowed in Congress
  • Voluntary commitments from AI companies have been treated as optional, temporary, and apparently deletable

The problem isn’t just the deletion — it’s the message it sends. If voluntary safety commitments can be erased from the public record without explanation, were they ever meaningful? These agreements were already non-binding. Now they’re also non-remembered.

What is the value of a safety pledge that can be memory-holed?

The Transparency Gap

Here’s what makes this particularly galling: these are the same companies currently in an AI cybersecurity arms race. Microsoft just launched MDASH, a system that found 16 previously unknown Windows vulnerabilities. OpenAI launched Daybreak the same day Google caught the first AI-built zero-day. Anthropic’s Mythos is finding hundreds of vulnerabilities in major software.

AI capabilities are accelerating. The companies building these systems agreed — voluntarily, briefly, now apparently reversibly — to let the government test them before release. And the government just deleted the record of that agreement.

The NZ Angle

New Zealand doesn’t have a voluntary AI safety testing framework. We don’t have a mandatory one either. The updated AI Blueprint for Aotearoa talks about responsible AI adoption, but there’s no equivalent of NIST or a government body with the authority to test AI models before deployment.

When the US — the country where most frontier AI is built — can’t even maintain a voluntary testing agreement on its website, it underscores why small countries need their own safeguards. You can’t outsource AI safety oversight to a partner who keeps erasing the record of that oversight.

The Bigger Picture

The timing is the thing. In the same week:

  1. Microsoft proves AI defense works at enterprise scale with MDASH
  2. Google catches the first AI-built zero-day exploit in the wild
  3. The US government deletes the record of voluntary AI safety testing commitments
  4. China drafts specific regulations for agentic AI requiring human oversight

Three of those four developments point toward more oversight, more transparency, more guardrails. The fourth — the deletion — points the other way. And it’s the one with the power of the state behind it.


🔍 THE BOTTOM LINE

The US had a voluntary, non-binding AI safety testing framework that depended on Big Tech's own participation. It was already the bare minimum. Now even that minimum has been deleted from the official record. If safety commitments can be erased without explanation, they were never commitments — they were press releases.


❓ Frequently Asked Questions

Q: What exactly was on the deleted page? The specific commitments from Google, Microsoft, and xAI about submitting new AI models to government scientists for pre-release security testing. The agreement was announced May 5, 2026, and the details page was removed sometime after that without explanation.

Q: Is this related to the GUARD Act? Separate but parallel. The GUARD Act is legislation working through Congress that would create AI monitoring authority. This deletion is about an executive-branch voluntary framework that already existed. Both fit a pattern of reducing AI oversight in the US.

Q: What does this mean for NZ? NZ has no equivalent framework. When the country building most frontier AI is erasing its safety commitments, NZ needs to build its own testing and oversight capability. The AI Blueprint refresh is a start, but capability requires funding and authority.

Q: Can the commitments still exist even if the page is gone? Technically, the companies could still honour them voluntarily. But removing the public record removes accountability — and without accountability, voluntary commitments are just words.


Sources

  • Reuters
  • The Next Web
  • CNA
  • Technology.org
  • Internet Archive