OpenAI just published a new set of principles, and if you were hoping for a reaffirmation of the founding mission — ensuring AGI benefits all of humanity — you’re going to want to sit down.
The new framework, posted April 26 by Sam Altman, lays out five principles: Democratization, Empowerment, Universal Prosperity, Resilience, and Adaptability. Sounds lovely, right? Like a wellness brand that discovered AI policy. But read between the lines and the shift is stark: AGI mentions have dropped from a dozen in the 2018 Charter to just two. The pledge to stop competing if a safety-focused lab beats them to AGI? Gone. The explicit commitment to avoid capabilities that could risk humanity? Replaced with a vague promise to “err on the side of caution in the face of uncertainty.”
This isn’t a refinement. It’s a rebrand.
What Changed, Specifically
The 2018 OpenAI Charter was unmistakable about AGI. It mentioned the term twelve times. It explicitly committed to:
- Stopping competing with, and starting to assist, any value-aligned, safety-conscious project that came close to building AGI before OpenAI did
- Avoiding value lock-in and the concentration of power
- Putting safety above the race
The new framework? AGI gets two mentions — one in the mission statement at the top, and one buried in the details. The competitive pledge has vanished entirely. Instead, OpenAI now promises to “resist the potential of this technology to consolidate power in the hands of the few” through collaboration with governments and other labs.
Translation: we’ll talk about resisting concentration while spending $122 billion on compute infrastructure and vertically integrating to lock in our market position. Bold strategy.
The Five Principles, Translated
Let me save you some time:
- Democratization — “We’ll give everyone access” (to our paid products)
- Empowerment — “AI helps people do stuff” (groundbreaking insight)
- Universal Prosperity — “We’re buying absurd amounts of compute while revenue is small” (their words, not mine)
- Resilience — “We’ll work with governments on biosecurity and cybersecurity” (after we build the models that create those risks)
- Adaptability — “We might change our mind later” (the only honest principle here)
The framework reads like it was written to survive a congressional hearing, not to guide the development of superintelligence.
Why This Matters
Here’s the thing: OpenAI is arguably the most influential AI lab on the planet. When they shift from “AGI safety is our core mission” to “iterative deployment and market expansion are our priorities,” that’s not just a branding exercise. It’s a signal.
And the signal is: the race is on, and safety is now a supporting character.
Remember when Altman was sending mixed messages about AGI timelines? This is the other shoe dropping. The 2018 Charter said OpenAI would step aside for a safer competitor. The 2026 framework says OpenAI will collaborate with governments to resist power concentration — while being the most concentrated AI power on Earth.
It’s the kind of rhetorical gymnastics that would impress a Cirque du Soleil recruiter.
The NZ Angle
New Zealand doesn’t have its own OpenAI-scale lab, but we do have a government trying to figure out AI regulation in a world where the biggest player just gutted its safety commitments. The EU AI Act and other frameworks assumed there’d be some restraint from the labs themselves. If OpenAI is now openly prioritizing deployment speed over safety, regulators everywhere — including Wellington — need to stop waiting for industry self-regulation and start writing actual rules.
The Real Question
Is AGI being deprioritized because it’s closer than anyone’s admitting — and OpenAI doesn’t want binding safety commitments when they’re about to cross the finish line? Or is it further away than the hype suggests, and the company is pivoting to the more profitable business of selling iterative AI products?
Either answer is uncomfortable. If AGI is close, we’ve just lost the single most important safety pledge in the industry. If it’s far, then the most valuable company in the AI space has essentially admitted it’s a SaaS business with better PR.
Neither version makes me sleep easier.
🔍 THE BOTTOM LINE
OpenAI didn’t just update its principles. It removed the ones that might actually constrain it. The company that once promised to step aside for a safer competitor now promises to… collaborate with governments. The same governments it’s outspending by orders of magnitude. If you were waiting for AI labs to police themselves, the wait is officially over — and the answer is no.