🎯 The Quote That Started It All
Jensen Huang, CEO of NVIDIA and one of the most powerful executives in tech, said something at a conference last year that everyone accepted as fact:
“Building a 100,000 GPU AI data center takes most companies 18 to 24 months.”
He wasn’t bragging. He was explaining why NVIDIA’s customers would be dependent on them for years to come. The message was clear: You need us. You always will. Building this stuff is hard.
Then Elon Musk decided to build a 100,000 GPU data center.
He didn’t take 18 months. He didn’t take 12 months. He took 90 days.
The Memphis, Tennessee facility — dubbed “Memphis 1” — went from empty warehouse to one of the world’s largest AI supercomputers between October and December 2024. By January 2025, xAI announced “Memphis 2” — another 100,000 GPUs, this time in about 60 days.
Jensen said it takes two years. Musk treated it like a weekend project.
That’s not just a flex. That’s a declaration of war.
🏗️ The Pattern Nobody Connects
Here’s what makes the Terafab announcement — Musk’s $25 billion chip factory joint venture between Tesla, SpaceX, and xAI — different from every other “we’re building chips” press release:
Musk has a 20-year resume of doing the impossible on schedule.
| Project | What Everyone Said | What Actually Happened |
|---|---|---|
| SpaceX reusable rockets | ”Physics doesn’t work. Landing boosters is impossible.” | Now routine. 400+ successful landings. |
| Tesla at scale | ”EVs are golf carts. You can’t mass-produce luxury EVs profitably.” | Tesla is the world’s most valuable car company. |
| Starlink | ”6,000 satellites? You’ll go bankrupt. Physics won’t allow low-latency.” | 6,000+ satellites orbiting. Profitable. Dominating rural internet. |
| Neuralink | ”Brain implants in humans? That’s decades away.” | First human trial successful. Patient controlling cursor with thoughts. |
| xAI | ”You’re 5 years too late. OpenAI and Google have won.” | Grok-3 is now a top-3 frontier model. 18 months from zero to competitive. |
| xAI Memphis 1 | ”100,000 GPUs takes 2 years minimum.” | 90 days. Done. |
When Musk says “we’ll build our own AI chips,” it’s not a threat. It’s a pattern. The man has turned “impossible supply chain problems” into a competitive advantage.
Terafab isn’t a startup pitch. It’s the next step in a two-decade resume of vertical integration.
💰 NVIDIA’s $26 Billion Hedge
Here’s where it gets interesting.
NVIDIA isn’t sitting still. According to a 2025 financial filing first reported by WIRED, NVIDIA plans to spend $26 billion over five years building open-weight AI models.
Think about that. The company that sells the GPUs is now building the software that runs on them — and making it free.
Why?
Because open models create more demand for GPUs.
- Closed models (GPT-4, Claude): A few companies, huge clusters, but limited customers
- Open models (Qwen, DeepSeek, Llama): Thousands of companies running AI everywhere, all needing GPUs
When intelligence is free, the bottleneck shifts to hardware. And NVIDIA owns the hardware bottleneck.
It’s brilliant. NVIDIA makes money whether open models win or closed models win. They’re the house in the AI casino.
🤝 The Awkward Investment
Here’s the twist that should be front-page news:
NVIDIA invested in xAI’s $20 billion Series E round.
Yes, really. NVIDIA is simultaneously:
- Selling GPUs to xAI (their biggest customer)
- Investing in xAI (strategic stake in the $230B valuation round)
- Competing with xAI (Terafab will make chips that compete with NVIDIA’s)
- Backing open models that xAI uses to prove NVIDIA hardware is best
It’s like GM investing in Tesla, selling them engines, and building their own EV platform all at once. It’s insane. It’s genius. It’s desperate.
Jensen knows what’s coming. He’s hedging every possible future:
- If xAI succeeds with Terafab → NVIDIA already has equity
- If xAI fails at fabrication → NVIDIA keeps selling them GPUs in the meantime
- If open models win → NVIDIA builds the best open models
- If closed models win → NVIDIA sells GPUs to OpenAI and Anthropic
The house always wins. Until someone flips the table.
🔥 Terafab: Not a Threat, a Promise (With a Massive Caveat)
The Terafab announcement — a $20-25 billion chip factory joint venture — got filed under “another Musk press release” by most tech media.
That’s a mistake. But it isn’t a done deal either.
What Terafab actually means:
- xAI currently buys NVIDIA H100/H200 GPUs at ~$30,000 each
- A 100,000 GPU cluster = $3 billion in hardware
- xAI plans to have 1+ million GPUs by 2027 = $30+ billion in hardware costs
- Terafab makes those chips in-house for ~$10,000 each (estimated)
- Savings: $20+ billion in cumulative hardware costs, if they can pull it off
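The arithmetic above is easy to sanity-check. A quick sketch using only the article’s own figures (the per-GPU prices and the 2027 target are estimates quoted above, not confirmed pricing):

```python
# Back-of-envelope GPU cost math from the bullet points above.
# All figures are the article's estimates, not confirmed pricing.

NVIDIA_PRICE = 30_000      # ~price per H100/H200-class GPU (estimate)
INHOUSE_PRICE = 10_000     # ~estimated per-chip cost if Terafab works
CLUSTER_SIZE = 100_000     # GPUs in one Memphis-scale cluster
TARGET_2027 = 1_000_000    # xAI's stated 2027 GPU target

cluster_cost = CLUSTER_SIZE * NVIDIA_PRICE   # $3.0B per 100k-GPU cluster
buy_cost = TARGET_2027 * NVIDIA_PRICE        # $30B buying from NVIDIA
build_cost = TARGET_2027 * INHOUSE_PRICE     # $10B building in-house
savings = buy_cost - build_cost              # $20B cumulative difference

print(f"One 100k-GPU cluster: ${cluster_cost / 1e9:.1f}B")
print(f"1M GPUs at NVIDIA prices: ${buy_cost / 1e9:.0f}B")
print(f"1M GPUs in-house:         ${build_cost / 1e9:.0f}B")
print(f"Cumulative savings:       ${savings / 1e9:.0f}B")
```

Note the savings is cumulative across the 1M-GPU build-out, not an annual figure; it only becomes recurring if xAI keeps buying at that rate.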
The caveat: Building a chip factory is orders of magnitude harder than assembling a GPU cluster.
Memphis was integration — racking GPUs, networking them, cooling them. Hard, but doable in 90 days with enough money and people.
Terafab is fabrication — lithography, process nodes, yield optimization, defect rates. TSMC has 30+ years of accumulated expertise. They didn’t get good overnight. They got good by making mistakes at scale for three decades.
The realistic timeline:
- 2026: Terafab breaks ground (Austin, Texas — the old Seaholm Power Plant site)
- 2027-2028: First chips roll out — probably inference accelerators, not training GPUs (easier to make, less cutting-edge)
- 2029-2030: Terafab at scale, if yields are competitive
- 2030+: xAI potentially reducing NVIDIA dependence — if Terafab delivers
That’s not a countdown. That’s a maybe — with a lot of execution risk between here and there.
Why this still matters: Even if Terafab only partially succeeds, it changes the negotiating dynamic. NVIDIA can’t price-gouge a customer who has a credible exit strategy. The threat of competition can be as powerful as competition itself.
🧠 Why Open Models Matter in This Fight
The open vs. closed AI debate misses the point entirely.
Open models (Qwen, DeepSeek, Llama, Mistral):
- Free to download
- Run on your own hardware
- No API costs
- Require: GPUs
Closed models (GPT-4, Claude, Gemini):
- API access only
- Monthly subscription or per-token pricing
- Require: GPUs (at OpenAI/Anthropic/Google data centers)
Either way, you need GPUs. But open models democratize who buys them.
- Closed AI future: 5 companies buy all the GPUs (OpenAI, Anthropic, Google, Microsoft, Meta)
- Open AI future: 5,000 companies buy GPUs and run models themselves
NVIDIA prefers the second future. More customers = more pricing power = more revenue.
But here’s the catch: if chips become commoditized, NVIDIA loses pricing power.
That’s what Terafab threatens — if it succeeds. Not NVIDIA’s revenue today — their margins tomorrow. When AI chips are cheap and plentiful, the GPU monopoly ends.
The reality check: TSMC makes 90%+ of the world’s advanced chips. They’re not standing still. NVIDIA’s Blackwell and Rubin roadmaps keep them ahead. AMD and Intel are also gunning for NVIDIA’s throne. Terafab is entering a crowded, brutally competitive market where the incumbent has a 30-year head start.
Success isn’t guaranteed. But the attempt changes the game.
🔮 The Endgame
Let’s play this out — with realistic probabilities:
Scenario 1: Terafab Succeeds (30-40% probability)
- AI chips become commodities (like DRAM or SSDs)
- NVIDIA’s margins compress from 75% to 40-50%
- Stock price adjusts accordingly
- AI becomes cheap enough for everyone
- Winners: Every AI company, every user
- Losers: NVIDIA shareholders
Scenario 2: Terafab Struggles (40-50% probability)
- Chip manufacturing is harder than rockets (TSMC has 30 years of process optimization)
- Terafab achieves partial success — inference chips, not training GPUs
- xAI keeps buying NVIDIA GPUs for high-end workloads through 2030+
- NVIDIA maintains pricing power, but faces pressure
- Winners: NVIDIA (but with margin pressure)
- Losers: Terafab investors, xAI’s cost targets
Scenario 3: Hybrid Future (20-30% probability)
- Terafab makes specialized chips for xAI/Tesla workloads (inference, Dojo)
- NVIDIA makes general-purpose GPUs for everyone else (training, high-end)
- Both companies coexist, but NVIDIA’s monopoly is partially broken
- Winners: The market (competition is good), customers have options
- Losers: NVIDIA’s 75% margins (but still profitable)
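For a rough sense of how these scenarios net out, here is a probability-weighted sketch. The probabilities are the midpoints of the ranges above (midpoints overshoot 100%, so the code normalizes them), and the per-scenario margin figures for Scenarios 2 and 3 are my own illustrative assumptions, not from the article:

```python
# Probability-weighted view of NVIDIA's gross margin across the three
# scenarios above. Scenario-1 margin uses the article's 40-50% midpoint;
# the other two margins are illustrative assumptions.

scenarios = {
    "terafab_succeeds":  {"prob": 0.35, "margin": 0.45},  # 40-50% range
    "terafab_struggles": {"prob": 0.45, "margin": 0.70},  # pressure on 75%
    "hybrid_future":     {"prob": 0.25, "margin": 0.60},  # partial break
}

# Midpoints sum to 1.05, so normalize before weighting.
total_p = sum(s["prob"] for s in scenarios.values())
expected_margin = sum(
    (s["prob"] / total_p) * s["margin"] for s in scenarios.values()
)

print(f"Probability-weighted gross margin: {expected_margin:.1%}")  # ~59%
```

Even under the article’s own odds, the weighted outcome is meaningful margin compression from today’s ~75%, which is the whole strategic point.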
The wild card: TSMC. If TSMC decides to compete directly with NVIDIA (they’ve hinted at it), everyone loses except TSMC. They’re the real bottleneck.
🥝 The NZ Take
New Zealand businesses watching this should care for one reason: AI cost trajectories.
Right now, running AI is expensive because NVIDIA charges what the market will bear. If Terafab (or AMD, or Intel, or someone) breaks that monopoly, AI becomes 5-10x cheaper within 3-5 years.
That changes every business case.
- Today: “AI is too expensive for our use case”
- 2028: “AI is cheaper than hiring an intern”
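The “5-10x cheaper within 3-5 years” claim can be turned into a rough trajectory. This is purely illustrative: the `future_cost` helper, the $1.00 baseline, and the smooth exponential decline are all assumptions, not data.

```python
# Illustrative cost trajectory for the "5-10x cheaper in 3-5 years" claim.
# Assumes a smooth exponential decline reaching the full factor at `horizon`.

def future_cost(cost_today: float, cheapening: float,
                years: int, horizon: int) -> float:
    """Cost after `years`, if it falls `cheapening`-fold over `horizon` years."""
    return cost_today / (cheapening ** (years / horizon))

COST_TODAY = 1.00  # $ per unit of AI work today (placeholder baseline)

for year in range(6):
    mild = future_cost(COST_TODAY, 5, year, 5)    # 5x-cheaper trajectory
    steep = future_cost(COST_TODAY, 10, year, 5)  # 10x-cheaper trajectory
    print(f"Year {year}: ${steep:.2f}-${mild:.2f} per unit of AI work")
```

The point isn’t the exact curve; it’s that a business case rejected at today’s cost may clear easily two or three years in, which is exactly the wait-versus-deploy bet described above.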
The companies betting their future on expensive AI today are gambling on NVIDIA’s continued dominance. The companies waiting 2-3 years to deploy AI are betting on commoditization.
Both are reasonable bets. But you should know which one you’re making.
💭 The Bottom Line
NVIDIA is playing 4D chess. Musk is playing 5D chess. The board is bigger than either of them.
Jensen Huang’s strategy is flawless: sell GPUs to everyone, build open models to create demand, invest in your competitors to hedge against disruption. It’s the best possible position in the AI arms race.
Elon Musk has done the “impossible supply chain” thing before. Reusable rockets. Mass-market EVs. Global satellite internet. Brain implants. The man has a pattern.
But chip fabrication is a different species of hard. TSMC’s moat isn’t capital — it’s three decades of yield optimization, process refinement, and ecosystem lock-in. You can’t money-blast your way past that in 3 years. You might be able to in 10.
When Jensen says “building AI data centers takes 2 years,” he’s describing the world as it exists. When Musk builds one in 90 days, he’s describing the world as it could be — for assembly, not fabrication.
Terafab isn’t a threat to NVIDIA’s revenue today. It’s a potential threat to their margins in 2030. And in tech, potential threats shape strategy as much as real ones.
The house always wins — until someone builds their own casino. But building a casino is harder than buying slot machines.
Watch these signals:
- Terafab’s first tape-out (when do they actually ship chips?)
- NVIDIA’s gross margins over the next 4-8 quarters (any compression?)
- TSMC’s response (are they worried, or just watching?)
- xAI’s GPU purchases (still buying H100s, or shifting to custom silicon?)
That’s where this war gets decided. Not in model benchmarks. Not in press releases. In silicon — and in the brutal reality of semiconductor physics.
The honest take: Directionally right on the tensions. Overly optimistic on the timeline. Real disruption will come from execution details, not bold announcements. But the attempt itself changes the game — because NVIDIA now has to compete against a customer with a credible exit strategy.
That’s worth watching.
Sources:
- WIRED: “Nvidia Will Spend $26 Billion to Build Open-Weight AI Models” (2025 financial filing)
- i10x Analysis: “xAI’s $20B Series E: Building a Gigafactory of Compute”
- TeslaRati: “Elon Musk launches TERAFAB: The $25B Tesla-SpaceX-xAI chip factory”
- NVIDIA Technical Blog: Qwen3.5 VLM partnership (Feb 2026)