
Google DeepMind Hires a Philosopher — His Job Title Is Literally 'Philosopher'

DeepMind's new hire has 'Philosopher' on his business card. Henry Shevlin will tackle machine consciousness and AGI readiness — because the question of whether AI can be aware just left the seminar room.

Google DeepMind · Machine Consciousness · AGI · AI Ethics · Henry Shevlin

Google DeepMind just made a hire that would have seemed absurd five years ago. Henry Shevlin, a philosopher from the University of Cambridge, is joining the company next month with an unusual job title: Philosopher.

Not “Ethics Advisor.” Not “Responsible AI Lead.” Philosopher.

Shevlin announced the move on X, writing: “I’ve been recruited by Google DeepMind for a new Philosopher position (actual title), focusing on machine consciousness, human-AI relationships, and AGI readiness, starting in May.” He’ll continue his research and teaching at Cambridge part-time.


Why a Philosopher — and Why Now?

This isn’t a PR move. Shevlin’s academic work sits precisely where AI development is heading next.

As Associate Director of the Leverhulme Centre for the Future of Intelligence at Cambridge, Shevlin has spent years investigating consciousness, moral status, and the boundaries between human and machine minds. His publications include papers with titles like “Consciousness, Machines, and Moral Status” and “How could we know when a robot was a moral patient?” — questions that sound abstract until you’re building systems that might, eventually, qualify.

The hire signals something specific: DeepMind believes the question of whether AI systems can be conscious is no longer purely theoretical. As AGI timelines compress, consciousness research is shifting from philosophy departments to engineering teams.


The Trend: Philosophy Moves Into the Lab

Shevlin isn’t the first philosopher hired by a major AI company. Anthropic employs Amanda Askell as its in-house philosopher, guiding the ethical framework of Claude. The pattern is clear: AI labs are recognising that technical capability without philosophical rigour is a liability.

These aren’t window-dressing appointments. Both Shevlin and Askell have deep expertise in questions that directly affect product decisions: What constitutes moral consideration? When does a system deserve ethical weight? How should humans relate to increasingly capable AI?

For companies racing toward AGI, these aren’t academic exercises. They’re product requirements.


What Shevlin Actually Works On

Shevlin’s brief at DeepMind covers three areas:

  1. Machine consciousness — Could AI systems have subjective experience? How would we know? His 2021 paper “General intelligence: an ecumenical heuristic for artificial consciousness research?” directly tackles the measurement problem here.

  2. Human-AI relationships — His chapter “Relating to Machines” in the Oxford Handbook of Generative AI explores the ethical risks of social AI, a category that now includes everything from companion chatbots to AI therapists.

  3. AGI readiness — What does it mean for a system to be “ready” for general intelligence? Readiness isn’t just about benchmarks; it’s about whether the system behaves in ways that are comprehensible and aligned with human values.

These three threads converge on a single premise that DeepMind, Anthropic, and every other frontier lab is grappling with: as AI systems become more capable, the hardest problems aren’t technical but conceptual.


Why It Matters for the Singularity

If you’re tracking the path toward AGI and beyond, this hire is a marker. When frontier labs start staffing philosophers with job titles to match, it means two things:

First, the industry is taking consciousness seriously. Not as a distant hypothetical, but as something that needs to be thought through now, before it becomes an emergency. Shevlin’s work on moral status — determining which beings deserve ethical consideration — becomes urgent when you’re building systems that might qualify.

Second, the questions that matter most are shifting. The old framing — “can machines think?” — is being replaced by something harder: “if they can think, what do we owe them?” and “how would we even know?”

DeepMind hiring Shevlin doesn’t answer these questions. But it does suggest the people building the most powerful AI systems on Earth think they need someone asking them — inside the building, with a seat at the table.


Sources

India Today, Moneycontrol, The Hans India