
Musk Admits xAI Distills OpenAI's Models: The Trial Week That Broke the AI Feud Wide Open

Musk took the stand claiming he was duped by OpenAI. Then he admitted xAI distills their models. The courtroom gasped.

Elon Musk · Sam Altman · OpenAI · xAI · AI Trial

Elon Musk walked into a federal courthouse in Oakland wearing a crisp black suit and tie, and walked out having told a jury that his $1.75 trillion AI company trains its models by copying the company he’s suing for being a threat to humanity.

You can’t write this stuff. Well, you can. But nobody would believe it.

The Musk v. Altman trial kicked off this week in Oakland, and it delivered more drama than anyone predicted. Protesters lined the streets outside, carrying signs urging people to quit ChatGPT or boycott Tesla or both. Inside, armies of lawyers carried boxes of exhibits while a handful of nervous OpenAI employees watched from the gallery.

Musk is asking the court to remove Sam Altman and Greg Brockman from their roles and unwind the restructuring that made OpenAI a for-profit company. The outcome could derail an IPO that’s reportedly targeting a $1 trillion valuation. Meanwhile, xAI is expected to go public as part of SpaceX as early as June, at a target valuation of $1.75 trillion.

So: the man suing to “save” a nonprofit is simultaneously taking his own for-profit AI company public at nearly twice the valuation. Got it.


"I was a fool"

Musk told the jury he was “a fool who provided them free funding to create a startup.” He said he donated $38 million to a nonprofit developing AI for the benefit of humanity, not to make the executives rich. “I gave them free funding, which they then used to create what would become an $800 billion company.”

He described “three phases” in his relationship with OpenAI. Phase one: “enthusiastically supportive.” Phase two: “I started to lose confidence that they were telling me the truth.” Phase three: “I’m sure they’re looting the nonprofit.”

The breaking point came in late 2022 when he learned Microsoft was investing $10 billion. “I texted Sam Altman, ‘What the hell is going on? This is a bait and switch,’” he testified.


The admission that made the courtroom gasp

OpenAI’s lawyer, William Savitt — who once represented Musk and Tesla — cross-examined Musk with surgical precision. He pressed Musk on whether xAI was really committed to safety, pointing out that xAI sued Colorado in April over an AI law designed to prevent algorithmic discrimination.

Then Savitt asked about xAI’s relationship with OpenAI’s technology. Musk admitted that xAI “partly” distills OpenAI’s models. Some people in the courtroom gasped.

Distillation is a technique where a smaller model is trained to mimic the outputs of a larger, more capable model. It’s how you build a cheaper, faster product on the back of someone else’s expensive research. In February, OpenAI accused Chinese company DeepSeek of doing exactly this to their models. In August 2025, Wired reported that Anthropic blocked OpenAI’s access to Claude for violating terms of service.
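Stripped to its essentials, distillation means a small "student" model is trained against the soft outputs of a larger "teacher" rather than against ground-truth labels. The toy sketch below is purely illustrative (the teacher function, the one-parameter student, and all names are invented for this example, not anyone's actual pipeline): a tiny logistic student is fitted by gradient descent to mimic a fixed teacher's probabilities.

```python
import math

def teacher(x):
    # Stand-in for a large model's output: a soft probability.
    # In real distillation this would be a query to the bigger model.
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

# Synthetic inputs and the teacher's soft labels for each.
xs = [i / 10 for i in range(-20, 21)]
soft_targets = [teacher(x) for x in xs]

# Tiny "student": a two-parameter logistic model.
w, b, lr = 0.0, 0.0, 0.5

def student(x):
    return 1 / (1 + math.exp(-(w * x + b)))

def loss():
    # Cross-entropy between the teacher's soft labels and the student.
    return -sum(t * math.log(student(x)) + (1 - t) * math.log(1 - student(x))
                for x, t in zip(xs, soft_targets)) / len(xs)

initial = loss()
for _ in range(500):
    # Gradient of the cross-entropy for a logistic model: (p - t) * x
    gw = sum((student(x) - t) * x for x, t in zip(xs, soft_targets)) / len(xs)
    gb = sum((student(x) - t) for x, t in zip(xs, soft_targets)) / len(xs)
    w -= lr * gw
    b -= lr * gb

final = loss()
print(final < initial)  # True: the student has converged toward the teacher
```

The student never sees "correct answers", only the teacher's outputs, which is why the technique lets a cheap model inherit behaviour that cost someone else a fortune to learn.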

“It is standard practice to use other AIs to validate your AI,” Musk said.

Let that sink in. The man who says OpenAI is a danger to humanity, who is suing to dismantle the company, whose entire legal argument rests on the idea that OpenAI betrayed its mission — that man’s own company is literally copying OpenAI’s homework. And when caught, he calls it “standard practice.”


Who is the steward of AI safety?

This is the question the trial keeps circling back to, and it’s the one that matters most. Musk painted himself as a champion of AI safety. “The worst-case scenario is a Terminator situation where AI kills us all,” he told the jury.

Judge Yvonne Gonzalez Rogers was having none of it. “Despite these risks, your client is creating a company that’s in the exact space,” she said, referring to xAI. “I suspect there’s plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands.”

Musk’s lawyer, Steven Molo, shot back: “We all could die as a result of artificial intelligence!” The judge snapped: “This is not a trial on whether or not artificial intelligence has damaged humanity.”

The irony is thick enough to cut with a subpoena. Musk claims he’s suing to protect humanity from unsafe AI, while his own company — the one he claims isn’t a real competitor — is building its products by copying the very AI he says is dangerous. If OpenAI’s models are a threat to civilisation, why is xAI training on them?


The poaching problem

Savitt also produced emails showing Musk poached OpenAI employees for his own companies. In 2017, after hiring OpenAI founding member Andrej Karpathy for Tesla, Musk emailed a VP: “The OpenAI guys are gonna want to kill me. But it had to be done.”

He also emailed a Neuralink cofounder that they could “hire independently or directly from OpenAI.”

“It’s a free country,” Musk said when pressed. “I can’t restrict their ability to hire people from other companies.”

Musk left OpenAI in 2018. He founded xAI in 2023. He says xAI isn't currently on track to reach AGI first. But he's taking it public at $1.75 trillion. Make of that what you will.


Why this matters

This trial is not really about OpenAI’s nonprofit structure. It’s about who gets to control the AI industry — and who gets to profit from it. Musk is arguing that OpenAI betrayed its mission by going for-profit. But he’s doing the same thing, at a larger scale, while simultaneously using OpenAI’s technology to build his product.

For New Zealand, the implications are indirect but real. The outcome of this trial will shape the competitive dynamics of the AI industry globally. If Musk wins, it could force OpenAI back to a nonprofit model — or at least extract a massive settlement. Either way, it reshapes who holds power in AI development.

But the bigger story is the distillation admission. If it’s “standard practice” for AI companies to train on each other’s outputs, then the intellectual property foundations of the entire industry are shakier than anyone wants to admit. Every model is built, at least partly, on every other model. The moats are thinner than the valuations suggest.

Next week, UC Berkeley computer scientist Stuart Russell will testify about AI safety. Greg Brockman — who’s been taking notes throughout Musk’s testimony — will also take the stand.

Week one set the tone. Week two might set the verdict.


SOURCES

  • MIT Technology Review — “Musk v. Altman week 1: Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models”
  • TechCrunch — “Musk v. Altman” trial coverage