There are two camps in the AGI debate. One says we just need to keep scaling — more compute, more data, more parameters — and general intelligence will emerge. The other says we’re missing something fundamental, and no amount of scaling will bridge the gap.
Keith McCormick, visiting professor at the University of Virginia’s Darden School of Business, just put a name on what AI has instead. He calls it “normal science” — and he thinks AI is stuck in it.
The Kuhn framework
McCormick draws on Thomas Kuhn’s landmark 1962 book *The Structure of Scientific Revolutions*. Kuhn made a distinction that’s become foundational in the philosophy of science:
- Normal science is the day-to-day grind of improving existing ideas — testing, refining, filling gaps, scaling what already works. Most scientists spend most of their time here. It’s essential, productive, and deeply unglamorous.
- Revolutionary science is the paradigm shift — the moment when an accumulated pile of unsolved problems forces a fundamentally new way of seeing things. Einstein superseding Newton. Darwin overturning the fixed-species view of life. These moments are rare, and they don’t come from working harder within the old framework.
McCormick’s argument is straightforward: AI is exceptionally good at normal science. It can process vast amounts of information, identify patterns, optimise existing approaches, and generate useful outputs across an enormous range of tasks. But the creative leap — the ability to look at a problem and see something nobody has seen before, to approach it from an entirely new angle — remains beyond current systems.
The anomaly test
Here’s where it gets practical. Kuhn said you know a paradigm is reaching its limits when anomalies start piling up — problems that should be solvable within the framework but aren’t.
McCormick points to some familiar ones: LLMs that forget earlier parts of a conversation, struggle with spatial reasoning, or generate confidently wrong answers. These aren’t minor bugs. They’re the kind of persistent failures that suggest something about the architecture itself is limited.
Even more telling is how users respond. “When something goes wrong, we tend to blame ourselves,” McCormick notes. “We think, ‘I must have written a bad prompt.’” That instinct — assuming the system is fine and the human is the problem — is exactly what Kuhn observed in paradigms that are still defended despite mounting contradictions.
Why scaling isn’t enough
The scaling camp argues that these limitations will disappear with more compute and more data. McCormick isn’t buying it.
His position: the current transformer architecture can be improved indefinitely within its own paradigm. It will keep getting better at normal science — more accurate, more capable, more useful. But the next leap to general intelligence requires something new, the same way quantum mechanics required something new beyond classical physics. You don’t get there by doing classical physics harder.
“There will be enough things that it can’t do that we’ll need something new,” McCormick says. “That breakthrough may come in the next few years, or it may take longer.”
The low-code lesson
McCormick isn’t just an AI theorist. He’s spent three decades building predictive analytics models across industries, and his most striking insight might not be about AGI at all.
Working with Citibank on the low-code platform KNIME, McCormick found that the most undervalued benefit of low-code tools wasn’t speed or simplicity — it was governance. The alternative to low-code wasn’t elegant Python scripts. It was chaotic, unauditable Excel spreadsheets built by business teams without technical support.
“It’s not really a choice between some low-code tool that the data team is unfamiliar with or Python,” McCormick said. “It’s either Excel — and they really don’t tell anybody how they did it — or it’s a tool like this.”
The lesson: the most important function of a tool isn’t always what it does. Sometimes it’s what it replaces, and what it makes visible.
What this means for the AGI timeline
McCormick’s framework doesn’t say AGI is impossible. It says the path to AGI isn’t a straight line from GPT-4 to something bigger. It requires a paradigm shift — a fundamentally new approach that nobody has found yet.
For businesses and policymakers betting on AGI timelines, this is a dose of realism that doesn’t require dismissing AI’s current capabilities. AI is already transforming how we do normal science. It’s already useful. The question isn’t whether AI is powerful — it clearly is. The question is whether the architecture that got us here is the one that gets us all the way there.
The honest answer: nobody knows. But if Kuhn is right, the clues will come from the anomalies piling up, not from the benchmarks being passed.
SOURCES
- UVA Darden School of Business — AI Is Great at “Normal Science,” but Breakthroughs Still Belong to People