Three Titans Pivot Away from LLMs: LeCun, Musk, and Fei-Fei Li Bet on World Models

In the first quarter of 2026, three of AI's most influential figures all made the same strategic bet: large language models have hit a ceiling. The future belongs to systems that understand the physical world.

Yann LeCun left Meta to raise $1 billion for world models. Elon Musk fired every xAI cofounder to rebuild from scratch. Fei-Fei Li launched World Labs for spatial intelligence. None of this is coincidence.

Yann LeCun: "The Industry Took a Wrong Turn"

In November 2025, Yann LeCun left his position as Meta's Chief AI Scientist with a pointed assessment: the entire industry had taken a wrong turn. LLMs, he argued, cannot reason. They cannot plan. They cannot learn from experience. They predict text, not reality.

By March 2026, LeCun had raised $1.03 billion for AMI Labs (Autonomous Machine Intelligence), the largest seed round in European history. His mission: build AI that understands the physical world through "world models" — internal representations that predict consequences of actions, not just sequences of words.

On March 23, 2026, LeCun and collaborators released the LeWorldModel (LeWM) paper, detailing a Joint-Embedding Predictive Architecture (JEPA) that trains stably from raw pixels without the hand-holding heuristics previous models required. The system achieves 48× faster planning than existing approaches, representing observations with 200× fewer tokens than foundation-model-based alternatives.

The key innovation: preventing "representation collapse," a failure mode in which the encoder maps all inputs to near-identical embeddings that trivially satisfy the prediction objective while carrying no information. A mathematical regularizer keeps the high-dimensional latent embeddings diverse and approximately Gaussian-distributed.
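The paper's exact regularizer isn't detailed here, but LeCun's earlier self-supervised work (VICReg, Bardes, Ponce & LeCun 2022) used a variance-covariance penalty for precisely this purpose. A minimal NumPy sketch of that idea follows; the function name, hyperparameters, and toy data are illustrative, not taken from the LeWM paper:

```python
import numpy as np

def anti_collapse_regularizer(z, gamma=1.0, eps=1e-4):
    """VICReg-style penalty against representation collapse.

    Two terms: a hinge that punishes any embedding dimension whose
    standard deviation falls below gamma (collapse toward a constant),
    and a penalty on off-diagonal covariance (redundant, correlated
    dimensions). z is a (batch, dim) array of latent embeddings.
    """
    z = z - z.mean(axis=0)                    # center each dimension
    std = np.sqrt(z.var(axis=0) + eps)
    variance_loss = np.mean(np.maximum(0.0, gamma - std))
    n = z.shape[0]
    cov = (z.T @ z) / (n - 1)                 # (dim, dim) covariance
    off_diag = cov - np.diag(np.diag(cov))
    covariance_loss = (off_diag ** 2).sum() / z.shape[1]
    return variance_loss + covariance_loss

# Fully collapsed embeddings (every input mapped to the same vector)
# are penalized far more heavily than diverse Gaussian ones.
rng = np.random.default_rng(0)
diverse = rng.standard_normal((256, 32))
collapsed = np.ones((256, 32))
assert anti_collapse_regularizer(collapsed) > anti_collapse_regularizer(diverse)
```

Minimizing this term alongside the prediction loss removes the trivial "predict a constant" solution, which is why such models can train stably without the stop-gradient and momentum-encoder heuristics earlier methods needed.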

"LLMs are fundamentally limited. They have no understanding of the physical world. They cannot reason about cause and effect. World models are the path forward."

— Yann LeCun, AMI Labs

Elon Musk: "Wasn't Built Right"

On March 13, 2026, Elon Musk admitted something few CEOs ever say about a company valued at $250 billion: xAI "wasn't built right the first time around."

The admission came after a cascade of departures. In early 2026, every one of xAI's 11 cofounders left the company. Jimmy Ba, co-author of the Adam optimizer paper, one of the most-cited works in AI history. Igor Babuschkin from DeepMind. Christian Szegedy from Google. Tony Wu, who led the reasoning team. All gone.

The timing aligned with corporate restructuring. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that valued the combined entity at $1.25 trillion. Weeks earlier, Tesla invested $2 billion in xAI. Tesla shareholders sued, arguing Musk directed their capital into his own venture.

But the technical reality was blunt: xAI's products, particularly its coding tools, were not competitive with Anthropic's Claude Code or OpenAI's Codex. Musk acknowledged the system needed to be rebuilt from scratch.

The cofounders' departure speaks volumes. When the leadership admits the product failed, researchers with alternatives don't stay. Meta is offering packages worth $300 million over four years to retain top AI talent. OpenAI, DeepMind, and Anthropic are all expanding aggressively. The entire cohort that built xAI is now elsewhere.

Fei-Fei Li: "From Words to Worlds"

In February 2026, Fei-Fei Li — the researcher who created ImageNet and helped launch the deep learning revolution — raised $1 billion for World Labs. The company's mission: spatial intelligence.

"From words to worlds," Li wrote. "AI's next frontier is understanding and interacting with the physical world."

World Labs generates interactive 3D environments from single images. The system understands spatial relationships, object permanence, and physical constraints in ways LLMs cannot. Where GPT models predict text, World Labs predicts geometry, lighting, and occlusion.

The convergence is striking. LeCun left Meta for world models. Musk is rebuilding xAI. Li launched World Labs for spatial intelligence. All three arrived at the same conclusion: LLMs that predict text cannot reason about reality.

The Technical Argument

Why are world models different? LeCun's LeWM paper provides a clear technical contrast:

LLMs: Train on text, predict next tokens, have no internal model of physical reality. They hallucinate because they cannot verify against world state.

World Models: Train on sensory data (pixels, audio, tactile), predict consequences of actions in latent space, maintain internal representations that match physical reality. They can reason about cause and effect.
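"Predict consequences of actions in latent space" can be made concrete with a toy planning loop: encode the current state, roll each candidate action through the learned predictor, and pick the action whose predicted outcome lands closest to the goal. Everything below is a stand-in sketch, not LeWM's actual interface; `encode` and `world_model` represent a learned encoder and predictor:

```python
import numpy as np

def plan(world_model, encode, state, candidate_actions, goal_embedding):
    """Score each candidate action by the predicted distance between
    its latent-space consequence and the goal embedding; return the
    action with the smallest predicted distance."""
    z = encode(state)
    best_action, best_dist = None, np.inf
    for action in candidate_actions:
        z_next = world_model(z, action)       # predicted consequence
        dist = np.linalg.norm(z_next - goal_embedding)
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action

# Toy stand-ins: the latent state is a 2-D position and actions are
# displacements, so the "world model" is simple addition.
encode = lambda s: np.asarray(s, dtype=float)
world_model = lambda z, a: z + np.asarray(a, dtype=float)

goal = np.array([1.0, 1.0])
actions = [(1, 0), (0, 1), (1, 1), (-1, -1)]
assert plan(world_model, encode, (0, 0), actions, goal) == (1, 1)
```

The point of the contrast: an LLM has no loop like this. It selects the next token, not the action whose simulated consequence best matches a goal state.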

The LeWorldModel paper demonstrates this through "violation-of-expectation" tests. The system assigns higher surprise to physical impossibilities like teleportation, but not to visual changes like color shifts. It has learned what is physically plausible.
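Mechanically, a violation-of-expectation test reduces to measuring prediction error in latent space: the model's surprise is how far the observed next state lands from the predicted one. The sketch below is purely illustrative; the embeddings are invented numbers chosen so that a "teleport" sits far from the prediction while a color shift barely moves it:

```python
import numpy as np

def surprise(predicted_embedding, observed_embedding):
    """Surprise as the distance between the world model's predicted
    latent state and the latent state actually observed."""
    return float(np.linalg.norm(predicted_embedding - observed_embedding))

# Hypothetical latent states: the model predicts the object continues
# along its trajectory.
predicted = np.array([1.0, 0.0, 0.5])

# A color shift barely perturbs the physically relevant dimensions...
color_shift = predicted + np.array([0.02, 0.01, 0.0])
# ...while teleportation lands far from the predicted state.
teleport = np.array([-3.0, 2.0, 0.5])

assert surprise(predicted, teleport) > surprise(predicted, color_shift)
```

A model that has learned good physical representations shows exactly this asymmetry: large error on impossible events, small error on physically benign appearance changes.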

This is what LLMs cannot do. They've never seen the physical world. They've only read about it.

The IBM Parallel

In 1981, IBM built the PC industry. Microsoft captured the value. IBM had the hardware, Microsoft had the operating system that became the platform.

In 2026, Big Tech is spending $700 billion on AI infrastructure. But who captures the value? The companies with the most users get the best training data. The best training data produces the best models. The best models attract more users. It's a data flywheel.

But there's a deeper threat: world models might make LLMs obsolete. If LeCun, Musk, and Li are right, the companies betting everything on text prediction are building the wrong thing. IBM built a great PC. Microsoft built the platform.

What This Means

The industry's smartest researchers are voting with their feet. LeCun didn't leave Meta for another LLM company. He left to build world models. Musk didn't fire his cofounders to improve Grok. He fired them to rebuild for a different architecture.

World models require different training, different infrastructure, different thinking. They learn from interaction, not just text. They predict consequences, not just completions. They might be harder to build, but they might also be the only path to AGI that can actually reason.

The LLM era isn't over. GPT-5, Claude, and Gemini will keep improving. But the researchers who defined the last decade of AI are now building the next one somewhere else.


This article reflects our analysis and opinion based on publicly available information at the time of publication. The AI landscape evolves rapidly. Verify important claims independently. Views expressed are those of Singularity.Kiwi editors.