Google’s Gemini AI model is now authorised for use inside classified US military networks — and the company deleted its own ethics principles to make it happen.
What’s actually happening
The US Department of Defense has expanded its existing partnership with Google, moving Gemini from unclassified systems (where it was already powering the GenAI.mil platform for 1.3 million personnel) into classified environments at Impact Level 5 (IL5), the tier reserved for sensitive national security data.
This isn’t a small upgrade. IL5 authorisation means Gemini will now handle intelligence analysis, logistics planning, and operational decision-making inside networks that carry actual military secrets.
The Pentagon is playing a multi-vendor game here: OpenAI and xAI are also in the mix. Anthropic, notably, is not; the company reportedly refused to participate in military applications.
The ethics deletion that preceded this
Remember Google’s famous 2018 AI principles? The ones that pledged the company wouldn’t develop AI for weapons, or for surveillance that violates internationally accepted norms? Those pledges were quietly removed earlier this year.
Over 580 Google employees, including directors, signed a letter urging CEO Sundar Pichai to reject the Pentagon deal, citing the risks of deploying AI in opaque classified environments. Management called the deal a “proud moment.”
When your own workforce of AI researchers says “don’t do this” and you do it anyway, that’s not a disagreement — that’s a decision.
The scale is staggering
The DoD’s AI budget request for FY2027 is $54.6 billion, up from $13.4 billion. That’s roughly a fourfold increase (54.6 ÷ 13.4 ≈ 4.1). Gemini in classified networks is just one piece of an enormous military AI buildout that’s happening faster than most people realise.
Google says the deal covers “any lawful government purpose”, a phrase broad enough to stretch from logistics optimisation to intelligence analysis to things they probably can’t talk about.
Why this matters beyond the US
New Zealand is a Five Eyes partner. US military AI capabilities flow directly into the intelligence-sharing arrangements that affect our own defence and security posture. When the Pentagon’s AI gets better at analysis, our intelligence agencies get access to better tools too.
But the ethical questions don’t have borders. If Google is willing to delete its own red lines for the Pentagon contract, what happens when a less democratic government makes a similar offer?
🔍 The Bottom Line
This is the moment the AI industry’s “don’t be evil” era officially ended — not with a bang, but with a delete key. The Pentagon now has Gemini inside classified networks, Google has a cheque, and 580 employees have learned that internal ethics don’t survive contact with military budgets. The question isn’t whether AI will be used in warfare. It already is. The question is whether anyone will remember the principles they abandoned along the way.
Sources:
- NBC News
- CNBC
- X/Twitter