James Baker spent a decade as the director of the Pentagon’s Office of Net Assessment — the legendary internal think tank founded in 1973 to figure out what’s coming before it arrives. Last month, the Trump administration shuttered it. Last week, the Pentagon signed seven AI companies to replace Anthropic on classified networks. And on Friday, Anthropic announced that Baker is joining the company as “strategist-in-residence.”
Timing matters. But so does the man.
Who Is James Baker?
ONA isn’t your average government shop. Founded by Andrew Marshall under Nixon, it was the Pentagon’s long-range forecasting unit — the people who predicted that information technology would make warfare cheaper, faster, and more precise decades before drones proved them right in Ukraine.
Baker led ONA from 2015 until the office was temporarily closed in 2025. He advised defense secretaries and national security advisors on how emerging technology — especially AI — would reshape national security. A 2016 ONA study (later declassified through Harvard’s Belfer Center) identified a “Cambrian explosion” in robotics and AI that would undercut the advantage of expensive platforms like $90 million fighter jets.
He was right. Cheap drones are now sinking Russian naval assets. The question is what he sees coming next — and why he thinks Anthropic is where he needs to be to work on it.
The Stakes
Baker isn’t joining Anthropic to write blog posts. He’ll lead analysis of how AI is affecting U.S. institutions and competition with China. In his first interview since the appointment, he told Defense One:
“We aren’t spending enough time thinking about the implications of recursive self-improvement. The greatest risk is the long-term viability of present institutions in war and in peace. That’s one of the questions I came to Anthropic to work on. It’s a multi-decade structural — even civilizational — problem.”
Let that sink in. The man who spent ten years thinking about civilizational threats for the Pentagon just said the biggest one isn’t a foreign military. It’s AI’s capacity to outpace the institutions we’ve built to manage it. And he went to Anthropic — the company currently blacklisted by the Pentagon as a “supply chain risk” — to work on that problem.
Digging In or Pivoting?
This hire has two possible readings, and they’re not mutually exclusive:
Reading 1: Anthropic is reinforcing its position. Baker brings deep institutional knowledge of how the Pentagon thinks, who makes decisions, and what levers get pulled. If you’re a company that just lost $5 billion in potential contracts because you refused to build mass surveillance tools or guide autonomous weapons, having someone who speaks fluent Pentagon is just smart. Baker helps Anthropic navigate the defense relationship without compromising safety commitments.
Reading 2: The line is shifting. Baker’s language about “recursive self-improvement” and “civilizational” challenges mirrors Anthropic’s own safety framing. But it also sounds like someone preparing the ground for a more nuanced stance. The Pentagon didn’t just blacklist Anthropic and move on — it’s still in talks about the Mythos model. Baker’s hire could signal Anthropic is building the expertise to re-engage with defense on its own terms — not capitulating, but not staying out either.
Here’s what makes this genuinely interesting: Baker himself said the U.S. has “a tight time window to adapt” to AI as a civilizational challenge. He didn’t say the military has a tight window. He said the U.S. — meaning every institution, not just the ones with guns. That framing aligns more with Anthropic’s worldview than the Pentagon’s.
The NZ Angle
New Zealand is a Five Eyes partner. When the Pentagon reshapes its AI strategy, NZ’s intelligence apparatus — the GCSB, the NZDF — follows the same contours. If Baker’s analysis at Anthropic shapes how the U.S. thinks about AI institutional risk, it’ll influence Wellington too, whether anyone there notices or not.
The difference is that NZ has no domestic AI industry capable of pushing back. No Anthropic equivalent. No one saying “we won’t build mass surveillance.” The GCSB’s controversial surveillance expansion passed quietly compared to the U.S. debate. If AI institutional risk is the civilizational challenge Baker describes, smaller nations without their own AI labs are the most exposed — and the least prepared.
What to Watch
- Does Baker’s appointment change the tone of Anthropic’s federal engagement? Mythos talks are ongoing.
- The Pentagon reinstated a smaller version of ONA in October. Watch whether Baker’s departure changes its AI focus.
- China’s AI strategy is accelerating while the U.S. argues with its own companies. The seven-vendor deal replaces Anthropic, but it doesn’t fill the institutional analysis gap ONA left.
🔍 THE BOTTOM LINE
James Baker didn’t leave the Pentagon for a tech job. He left because the institution he served for a decade can’t — or won’t — think clearly about what’s coming. Anthropic, for all its challenges, is one of the few places that takes civilizational AI risk seriously enough to actually work on it. Whether this hire helps Anthropic hold the line or gradually shift it is the story to watch. The man who mapped the future of warfare just bet his career on the idea that the real war is against our own inability to adapt.
Sources
- Defense One — “Former head of ‘Pentagon’s think tank’ joins Anthropic”
- Hacker News — Discussion thread