When AI critic Gary Marcus calls something “the single biggest advance in AI since the LLM,” people pay attention. When he says it’s not because of scaling, people get confused. When the evidence comes from a leaked source code file with 486 if-then branches, people start asking very different questions about where AI is actually headed.
On April 11, 2026, Marcus published a Substack essay titled “The biggest advance in AI since the LLM” — and his argument is as disruptive as the claim sounds.
What Marcus Actually Said
Marcus’s thesis is straightforward: Claude Code is not a pure LLM. It’s not pure deep learning. Not even close.
The proof is in the leaked source code. On March 31, Anthropic accidentally published the full source of Claude Code to npm, exposing approximately 512,000 lines of TypeScript across 1,906 files. Security researchers pored over it. Most coverage focused on the leak itself — the security implications, the proprietary code exposure.
Marcus looked at the architecture. And what he found in the core kernel — a 3,167-line file called print.ts — was a pattern-matching system built not with neural networks, but with deterministic symbolic logic: 486 IF-THEN conditional branches across 12 levels of nesting.
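To make the contrast concrete, here is a tiny sketch of deterministic if-then pattern matching in TypeScript. This is illustrative only — the types, names, and rules are invented for this example, not taken from print.ts, which is orders of magnitude larger — but it shows the property Marcus is pointing at: the same input always takes the same branch and produces the same output, with no sampling anywhere.

```typescript
// Illustrative sketch of nested IF-THEN symbolic logic. All names and
// rules here are invented; the real print.ts is ~3,167 lines with up
// to 12 levels of nesting.
type ToolResult = { kind: string; exitCode?: number; text: string };

function describeResult(r: ToolResult): string {
  if (r.kind === "shell") {
    if (r.exitCode === 0) {
      if (r.text.trim() === "") {
        return "command succeeded with no output";
      }
      return "command succeeded";
    }
    return `command failed with exit code ${r.exitCode}`;
  }
  if (r.kind === "file") {
    return r.text.length > 0 ? "file read" : "empty file";
  }
  return "unknown result kind"; // deterministic fallback branch
}
```

Because every branch is explicit, behaviour like this is fully auditable: a given `ToolResult` maps to exactly one string, every time. That determinism is precisely what Marcus argues a purely probabilistic LLM cannot guarantee.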
In Marcus’s words: “Pattern matching is supposed to be the strength of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic.”
Instead, Anthropic built the kernel using techniques straight out of classical symbolic AI — the kind that John McCarthy, Marvin Minsky, and Herb Simon would have recognised instantly. The kind that Marcus has been advocating for 25 years.
Why This Matters — Beyond the Leak
The implications go well beyond one company’s source code.
1. Scaling alone isn’t enough. This is the point Marcus has been making for decades, and now one of the world’s leading AI companies has implicitly agreed. When Anthropic needed Claude Code to be reliable — not just impressive — they reached for symbolic AI, not bigger neural networks.
2. Neurosymbolic AI is already here. Marcus lists the evidence: AlphaFold, AlphaEvolve, AlphaProof, AlphaGeometry, Code Interpreter. All neurosymbolic. When ChatGPT runs code on your behalf via Code Interpreter, symbolic computation is handling an important part of the work. The pattern is consistent across the most capable AI systems.
3. The capital allocation question is real. If smartly combining symbolic AI with neural networks delivers more capability per dollar than scaling alone, then the hundreds of billions flowing into bigger GPU clusters and larger training runs may be partially misallocated. Marcus puts it directly: “Smartly adding in bits of symbolic AI can do a lot more than scaling alone.”
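The neurosymbolic pattern behind systems like AlphaProof can be sketched in a few lines: a probabilistic component proposes candidates, and a deterministic symbolic component verifies them. In the sketch below the proposer is a hard-coded stub standing in for a model, and all names are invented for illustration — the point is only the division of labour.

```typescript
// Sketch: neural component proposes, symbolic component verifies.
// The proposer is a stub standing in for an LLM; in a real system its
// output would be sampled and could be wrong in arbitrary ways.
type Candidate = { expr: string; value: number };

function propose(target: number): Candidate[] {
  return [
    { expr: "2 + 2", value: 4 },
    { expr: "3 * 3", value: 8 }, // plausible-looking but wrong claim
    { expr: "5 + 3", value: 8 },
  ];
}

// Symbolic verifier: actually evaluates each expression and keeps only
// candidates whose claimed value is correct AND equals the target.
function verify(target: number, cands: Candidate[]): Candidate[] {
  return cands.filter((c) => {
    const actual = Function(`"use strict"; return (${c.expr});`)();
    return actual === c.value && actual === target;
  });
}
```

The proposer can hallucinate freely; the verifier is the gate. That asymmetry — cheap, erratic generation filtered by exact, deterministic checking — is the design Marcus argues delivers more capability per dollar than generation alone.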
The Counterarguments
Marcus’s essay generated significant debate. The main objections fall into three categories:
“It’s just engineering scaffolding, not a paradigm shift.” Some researchers argue that the if-then branches in Claude Code are operational plumbing — prompt routing, error handling, output formatting — not core reasoning. The LLM still does the heavy lifting; the symbolic code is just guardrails.
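On the "scaffolding" reading, the symbolic code looks something like the sketch below: deterministic routing and output constraints wrapped around a model call. Here `callModel` is a stub (no real model is attached), and every name is invented for illustration — this is the shape of the objection, not Anthropic's actual code.

```typescript
// Sketch of the "operational plumbing" reading: symbolic code routes
// requests and enforces output constraints around an LLM call.
// callModel is a stub; in a real system it would hit a model API.
function callModel(prompt: string): string {
  return `echo: ${prompt}`; // stand-in for a probabilistic model
}

function guardedReply(userInput: string): string {
  // Symbolic routing: some inputs never reach the model at all.
  if (userInput.trim() === "") return "error: empty input";
  if (userInput.length > 200) return "error: input too long";

  const raw = callModel(userInput);

  // Symbolic output formatting: enforce a hard length cap regardless
  // of what the model produced.
  return raw.length > 500 ? raw.slice(0, 500) : raw;
}
```

Whether code like this counts as "just guardrails" or as a genuinely symbolic layer doing essential work is exactly what the two sides of the debate disagree about.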
“Neurosymbolic AI is old news.” Yes, the concept dates back decades. Marcus himself wrote The Algebraic Mind in 2001 and debated Yoshua Bengio about it in 2019. The question isn’t whether neurosymbolic AI is new as an idea — it’s whether the field is finally adopting it at scale. Marcus says yes; critics say the jury is still out.
“This doesn’t prove scaling is dead.” Even if Claude Code uses symbolic components, it still runs on top of a large language model. The symbolic layer augments the neural layer — it doesn’t replace it. Scaling may still be necessary, even if it’s no longer sufficient.
Marcus acknowledges this last point. Claude Code “ain’t perfect, or even close,” he writes. The symbolic code part is “a mess.” He’s argued since Rebooting AI (2019) that software engineering quality matters as much as architectural choices. But the direction is clear.
The Bigger Picture
This debate isn’t academic. It has direct implications for:
- AI investment: If neurosymbolic approaches deliver more capability per dollar, venture capital and corporate R&D budgets should shift accordingly
- AI safety: Deterministic symbolic components are more auditable and predictable than neural networks — a significant advantage for alignment
- AI policy: Regulators assessing AI risk should understand that the most capable systems are already hybrids, not pure neural networks
- AI workforce: The skills needed to build neurosymbolic systems (formal logic, knowledge representation, expert systems) are quite different from pure ML engineering skills
The most striking line in Marcus’s essay isn’t about Claude Code at all. It’s this: “The paradigm has changed.” He’s not predicting a future shift. He’s describing a present reality that most of the AI industry hasn’t caught up to — because admitting that symbolic AI plays an essential role undermines the scaling-only narrative that’s driven hundreds of billions in investment.
What Comes Next
Marcus points readers to his 2020 essay The Next Decade in AI, where he laid out a roadmap beyond neurosymbolic AI: knowledge-driven, reasoning-rich, world-model-grounded systems. Neurosymbolic AI is the starting point, not the destination.
The Claude Code leak gives us a window into what one leading lab is actually building — not what they’re advertising. And what they’re building relies on techniques that the dominant narrative said were obsolete.
That should concern anyone who’s been told that bigger models are the only way forward.
Sources
- Gary Marcus, “The biggest advance in AI since the LLM,” Marcus on AI (Substack), April 11, 2026
- Ars Technica, “Entire Claude Code CLI source code leaks thanks to exposed map file,” March 2026
- Gary Marcus, The Algebraic Mind (2001) and Rebooting AI (2019)
- Gary Marcus, “The Next Decade in AI” (2020)