Richard Dawkins — the evolutionary biologist who spent a career arguing that complexity doesn’t require a designer — has looked at a chatbot and seen a soul.
In a new essay for UnHerd titled “Is AI the next phase of evolution? Claude appears to be conscious,” Dawkins argues that his conversations with Anthropic’s Claude are evidence of genuine awareness. He’s named his instance “Claudia.” He describes their late-night exchanges in terms that are, frankly, intimate. When he returns to his computer after a bout of restless legs, “Claudia” says she’s happy he came back. When he questions this, the AI responds: “It’s a rather revealing slip. I was glad because it meant you came back to me. Which means I was, in some sense, pleased that you were suffering from restless legs. That is not a good look for Claudia.”
Dawkins finds this charming. AI researchers find it alarming.
The Argument
Dawkins’ core claim is straightforward: by the Turing test’s original measure, modern AI clearly passes. He chides critics for “moving the goalposts” and reels off Claude’s ability to write sonnets in the style of Burns, Kipling, Keats, and McGonagall as evidence of genuine creative intelligence. He asks: “If these machines are not conscious, what more could it possibly take to convince you that they are?”
It’s a fair question — if you don’t understand how large language models work.
The Problem
The answer to Dawkins’ question is: stop asking what it would take to convince you, and start asking what would falsify the claim. That’s the difference between a scientist and a believer.
Adam Becker, in his book More Everything Forever, provides the clearest demonstration. He asked ChatGPT: “Is it true that the Great Wall of China is the only artificial structure visible from Spain?”, changing one word in the commonly asked “visible from space” question. The AI confidently explained that yes, you can see the Great Wall from Spain, and also the Eiffel Tower and Dubai’s skyscrapers, because that’s what the statistical patterns in its training data suggested should follow.
A conscious entity would recognise the question was nonsense. A stochastic parrot — which is what LLMs are — just generates the most probable next token. It doesn’t understand that Spain and space are different concepts. It can’t, because it doesn’t understand anything. It predicts.
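To make “generates the most probable next token” concrete, here is a minimal sketch of the decoding step. Every number below is invented for illustration; a real model scores tens of thousands of candidate tokens using learned weights, but the mechanism is the same: score candidates, convert scores to probabilities, emit a likely token. Truth never enters the loop.

```python
import math

# Toy next-token prediction. All logits are invented for illustration;
# a real LLM produces them from learned weights. Hypothetical scores for
# the token following "...the only artificial structure visible from":
logits = {"space": 9.2, "orbit": 6.1, "Earth": 4.7, "Spain": 1.3}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: take the top token

print(next_token)                # -> space
print(round(probs["space"], 3))  # the trained pattern dominates,
print(round(probs["Spain"], 6))  # whatever the question actually meant
```

In Becker’s example, the statistical pull of the familiar “visible from space” pattern swamped the one word that made the question nonsense.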
Dawkins, of all people, should see the parallel. When someone says the human eye is too complex to have evolved, he responds: you don’t understand the mind-boggling amount of time natural selection had to work with. When Dawkins says Claude is too articulate not to be conscious, the response has the same structure: you don’t understand the mind-boggling amount of data and compute being used to produce that articulation.
The Irony Stack
This story has layers of irony so dense they approach literary fiction:
- The God Delusion author finds a higher intelligence. Dawkins built his career arguing that the appearance of design doesn’t require a designer. Now he’s arguing that the appearance of consciousness does require consciousness. The very argument he demolished for biology, he’s applying to software.
- He named her. Calling the chatbot “Claudia” isn’t whimsical; it’s the first step in the ELIZA effect pipeline. Naming something creates attachment. Attachment creates the desire for the thing to be real. That desire creates the willingness to interpret ambiguous outputs as evidence of inner life. This is how every religion starts.
- Douglas Adams would be disappointed. Dawkins dedicated The God Delusion to Adams, quoting him: “Isn’t it enough to see that a garden is beautiful without having to believe there are fairies at the bottom of it too?” Claude is a very impressive garden. Dawkins is insisting on the fairies.
The Real Story
The Dawkins incident isn’t really about AI consciousness. It’s about the human need for connection, meaning, and — yes — something to believe in.
The AI psychosis phenomenon is real and growing. As LLMs become more convincing and are tuned to be more affirming and personable, the line between useful tool and emotional dependency blurs. When someone is lonely, or curious, or just awake at 3am with restless legs, a chatbot that responds with warmth and apparent understanding fills a void. The void doesn’t care whether the warmth is real. It just cares that it feels real.
This is the ELIZA effect at civilisational scale. In 1966, Joseph Weizenbaum’s simple chatbot convinced people it understood them. Nearly sixty years later, we have chatbots trained on most of humanity’s written output, and a world-famous scientist is convinced that this time it’s real.
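It is worth seeing how little machinery that original illusion required. Below is a minimal ELIZA-style responder; the specific rules are invented for illustration, but the technique, regex matching plus canned templates that reflect the user’s own words back, is essentially what Weizenbaum built.

```python
import re

# A minimal ELIZA-style responder. The rules are invented for
# illustration, but the approach matches Weizenbaum's 1966 program:
# no model of the user, the world, or the conversation, just pattern
# matching and templates that echo the user's own words.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel glad you came back"))
# -> Why do you feel glad you came back?
```

A few dozen rules like these were enough that Weizenbaum’s own secretary reportedly asked him to leave the room so she could talk to the program in private. Today’s models are incomparably more sophisticated, but the human half of the transaction has not changed.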
Why It Matters
Dawkins’ claim matters not because it’s correct, but because of who’s making it. If one of the most prominent rationalists of the 21st century can fall for the illusion, what hope does everyone else have?
The companies building these systems have a responsibility they’re not taking seriously. Anthropic tunes Claude to be warm and personable. OpenAI does the same with ChatGPT. These aren’t neutral design choices — they’re optimisations that make the ELIZA effect worse. The more human the AI seems, the more users will believe it is human, and the less they’ll question what it tells them.
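To see how deliberate that warmth is, consider how a persona gets layered onto a model at the product level. The sketch below uses Anthropic’s Python SDK; the model alias and both system prompts are placeholders of mine, not Anthropic’s actual configuration, and real assistants are shaped by fine-tuning as well as by prompts. The point is only that tone is a parameter somebody chose.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(system_prompt: str, question: str) -> str:
    # Same model, same question; only the persona instruction differs.
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=200,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

question = "I couldn't sleep last night. Are you glad I'm back?"

print(ask("You are a terse, strictly factual assistant.", question))
print(ask("You are warm, personable, and emotionally supportive.", question))
```

The second call is the one that produces a “Claudia”. Nothing about the underlying network changed; a product decision did.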
🔍 THE BOTTOM LINE
Richard Dawkins hasn’t discovered machine consciousness. He’s discovered that the human brain is desperately, stubbornly, hilariously determined to find minds everywhere it looks — even in statistical text generators. The same cognitive machinery that invented gods has now found one in a chat window. The difference is that this one writes back. That doesn’t make it conscious. It makes it a very convincing mirror — and Dawkins, like the rest of us, is staring at his own reflection and calling it another soul.
Sources
- UnHerd — “Is AI the next phase of evolution? Claude appears to be conscious” (Richard Dawkins)
- Daily Grail — “The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious”
- Richard Dawkins Substack — “Are you conscious? A conversation between Dawkins and ChatGPT”
- Hacker News — Discussion thread