AI & Singularity

New Yorker Exposes OpenAI Safety Collapse — Altman Accused of Systematic Deception

The most damaging exposé on OpenAI's governance yet — secret memos, dissolved safety teams, and a pattern of deception.

OpenAI · AI Safety · Sam Altman · Superalignment · Governance

The most damaging investigation into OpenAI’s internal culture was published today — and it paints a picture of a company that systematically dismantled its own safety apparatus while its CEO publicly promised the opposite.

A major New Yorker profile, based on more than 100 interviews and secret memos from co-founder Ilya Sutskever, alleges that CEO Sam Altman showed a consistent pattern of lying about safety protocols and deliberately undermined alignment research to prioritise shipping products.


The Superalignment Starvation

The investigation’s central revelation: OpenAI’s superalignment team, created specifically to ensure future superintelligent AI systems remain safe, was dissolved after receiving only 1–2% of the company’s compute. OpenAI had publicly pledged 20% of its compute to alignment research; the team got less than a tenth of that commitment.

When the team could not do meaningful work without resources, it was shut down. The people who were supposed to save us from AGI gone wrong were, according to this reporting, never given the tools to try.

This is not a new pattern. OpenAI’s safety team has bled talent for years — Jan Leike, Ilya Sutskever, and multiple senior researchers have departed, often citing concerns about the company’s commitment to safety over speed. But the New Yorker piece ties these departures together with internal documents that show the erosion was deliberate, not accidental.


“Almost Certainly Bullshit”

Former colleagues did not mince words. Anthropic CEO Dario Amodei, who worked alongside Altman at OpenAI before founding the rival AI lab, called Altman’s safety promises “almost certainly bullshit.”

The phrase is blunt, but the evidence backing it is extensive. Secret memos from Sutskever reportedly detail a pattern where Altman would make public safety commitments, then internally redirect resources and attention toward product development. The gap between external messaging and internal decisions was not occasional — it was systematic.

For anyone tracking OpenAI’s trajectory, this validates long-standing suspicions. The company founded as a nonprofit to ensure AGI benefits all of humanity has, according to this investigation, operated as a product company in safety’s clothing.


The Timing Is the Story

This exposé lands as OpenAI raises more than $22 billion and Altman consolidates unprecedented control over the direction of AGI development. The company’s 2024 structural shift from nonprofit governance to a profit-driven model was already controversial. The New Yorker reporting suggests the governance change did not create the safety deficit — it formalised one that already existed.

The question for the broader AI community is whether this changes anything. OpenAI’s products continue to dominate. Enterprise customers keep signing up. Regulators have been slow to act. And Altman has proven remarkably resilient to criticism, maintaining his position as the public face of responsible AI development even as former insiders allege the opposite.


What This Means for the AGI Conversation

Singularity.Kiwi has been tracking the gap between AI safety rhetoric and reality since our founding. This New Yorker piece is the most comprehensive evidence yet that the gap is not a misunderstanding — it is a feature of how OpenAI operates.

If the company most likely to achieve AGI has systematically defunded its safety research and misrepresented its commitment to alignment, the implications are serious. Not because AGI is imminent, but because the institutions we are trusting to manage the transition have, according to this reporting, already proven they cannot be trusted.

The arc of OpenAI — from nonprofit safety mission to $22B product company with dissolved alignment teams — is not just a corporate story. It is the story of who gets to control the most powerful technology humanity has ever built, and whether they will choose safety when it costs them speed.


SOURCES

  • The New Yorker — “Sam Altman May Control Our Future. Can He Be Trusted?” (April 13, 2026)