AI Leaders Sign 'Pro-Human' Declaration — But Does It Matter?
A bipartisan coalition including Turing Award winner Yoshua Bengio, Berkeley professor Stuart Russell, and figures from Steve Bannon to Susan Rice has signed a declaration that AI should serve humanity, not replace it. The question is whether anyone building AI will listen.
In March 2026, the Future of Life Institute released the Pro-Human AI Declaration — a statement with a signatory list so broad that it unites figures who normally wouldn't share a stage. Steve Bannon and Susan Rice. Glenn Beck and progressive activists. Jaron Lanier and corporate executives. The common thread: concern that AI development has taken a wrong turn.
The Declaration's Core Principles
- Humans, not AI, should make high-stakes decisions — AI advises, humans decide
- AI should augment human creativity, not replace it — Artists, writers, and creators shouldn't be automated away
- AI should strengthen human relationships, not substitute for them — Care work, counseling, companionship should remain human
- The benefits of AI should be broadly shared — Not concentrated among tech elites
- AI development should be transparent and accountable — No black boxes controlling critical systems
Who Signed?
The signatory list reads like a who's-who of AI research, politics, and civil society:
That's the striking thing about this declaration — it's not a partisan document. It brings together people who disagree on nearly everything else, united by concern about AI's direction.
What the Declaration Actually Says
The core framing is a "fork in the road" metaphor:
"As companies race to develop and deploy AI systems, humanity faces a fork in the road. One path is a race to replace: humans replaced as creators, counselors, caregivers, and friends. The other is a race to empower: AI developed as a tool that amplifies human capabilities while keeping humans in control."
— Pro-Human AI Declaration, March 2026
The declaration argues that current AI development incentives favor the "race to replace" — automation is profitable, while human augmentation is less so. Without intervention, the signatories warn, we get:
- Job displacement rather than job augmentation
- Algorithmic control over human agency
- Concentration of power in a few tech companies
- Loss of human connection in care, education, and relationships
The Response from Tech
Major AI companies have been notably quiet about the declaration. OpenAI, Anthropic, Google, and Meta have not signed. The companies building the most advanced AI systems are, perhaps unsurprisingly, not rushing to endorse a document that questions their approach.
Some signatories acknowledged the tension. "We're not saying stop AI development," said one signer. "We're saying there's a choice about how to develop it — and the current path isn't inevitable."
Is This Just Another AI Pledge?
You'd be forgiven for skepticism. AI companies have signed dozens of safety pledges, voluntary commitments, and ethical frameworks, and most have been followed by more of the same behavior. The Pro-Human Declaration has no enforcement mechanism. It's a statement of values, not a binding contract.
But there are differences:
- It's not industry-led. This isn't tech companies promising to behave. It's external voices — researchers, advocates, politicians — drawing a line.
- It's bipartisan. In a polarized political environment, getting Bannon and Rice on the same document is genuinely unusual.
- It's specific about tradeoffs. Rather than vague "AI for good" language, it explicitly names the choice: replace vs. empower.
The declaration is significant not for what it will stop — it won't slow AI development — but for what it creates: a shared framework around which people who disagree with the current direction can organize.
The companies building AGI aren't going to read this and say "oh, we should be more careful." The profit incentives haven't changed. The race is still on.
But something else is happening. The public is starting to see the tradeoffs. Workers are noticing that "AI augmentation" often means "AI takes the interesting parts of your job." Artists are seeing their work used to train systems that might replace them. Teachers are watching AI companies pitch chatbots as replacements for human instruction.
The Pro-Human Declaration gives language to that resistance. "Replace vs. empower" is a useful frame. It's not anti-AI. It's anti-replacement.
Will it matter? Probably not to OpenAI's roadmap. But it might matter to the people asking whether the future has a place for them in it. And that conversation is only beginning.
What Happens Next?
- Political action: Some signatories hope to translate declaration principles into legislation
- Public awareness: The "replace vs. empower" frame is designed for public discourse
- Corporate pressure: Shareholder activism and employee organizing around AI's impact
- Alternative development: Supporting AI projects that align with pro-human principles
Sources
- Pro-Human AI Declaration — Official Site
- Verity — Future of Life Institute Releases Pro-Human AI Declaration
- TechCrunch — A roadmap for AI, if anyone will listen
- The Verge — Inside the secret meeting that led to the AI political resistance
This article reflects our analysis and opinion based on publicly available information at the time of publication. The AI landscape evolves rapidly. Verify important claims independently. Views expressed are those of Singularity.Kiwi editors.