A Molotov cocktail thrown at Sam Altman’s San Francisco home. Gunfire at an Indianapolis city councilman’s house with a note reading “NO DATA CENTERS.” Threats against OpenAI’s offices prompting a lockdown. A pattern of violent incidents against AI infrastructure and the people behind it is emerging worldwide — and the targets are shifting from machines to humans.
🔥 THE INCIDENTS
Sam Altman’s home — April 2026. Daniel Alejandro Moreno-Gama, 20, allegedly threw a Molotov cocktail at Altman’s residence in San Francisco’s Russian Hill neighborhood. The device hit an exterior gate. Altman and his family were asleep inside but unharmed. Moreno-Gama is in custody.
Indianapolis councilman’s home — April 2026. The house of Ron Gibson, a city councilman who supports a datacenter project in the Martindale-Brightwood neighborhood, was shot at 13 times. The bullet holes are still there. The shooter left a message on his doorstep: “NO DATA CENTERS.” Gibson and his son were unharmed.
OpenAI offices — November 2025. A 27-year-old anti-AI activist threatened to murder people at OpenAI’s San Francisco offices, prompting a lockdown. He had expressed a desire to buy weapons.
Iran’s threat — March 2026. Iran’s Revolutionary Guard released satellite footage of OpenAI’s Stargate campus in Abu Dhabi and promised its “complete and utter annihilation.” A nation-state threatening to destroy an AI facility.
These are isolated incidents. For now. But they follow a historical pattern that should make everyone pause.
📜 HISTORY RHYMES
April 1812. A mill owner named William Horsfall was riding home from the Cloth Hall market in Huddersfield, UK. He had spent weeks boasting that he would ride up to his saddle in Luddite blood. At Crosland Moor, George Mellor — 22 years old — shot him. Horsfall died the next day. Mellor was hanged.
The Luddite movement wasn’t mindless destruction. The workers had genuine grievances — wages cut, jobs replaced by machines, communities destroyed. But when their protests went unheard and their livelihoods disappeared, violence became the outlet. Not because it was right, but because people who feel they have nothing to lose are dangerous.
April 2026. A datacenter CEO is attacked at home. A local politician supporting a datacenter is shot at. The technology has changed — from looms to datacenters — but the human dynamic is identical.
🎯 FROM MACHINES TO PEOPLE
There’s a critical difference between the Luddite era and now: you can’t break a datacenter the way you can break a loom.
A loom is wood and string held together by tension. A datacenter is concrete and steel with biometric locks, electrified fences, armed guards, and redundancy upon redundancy. Every component is duplicated so no single failure brings the whole thing down. The algorithm inside isn’t on any single rack — it’s a digital pattern distributed across millions of chips, mirrored across continents.
As AI infrastructure becomes more fortified and physically inaccessible, frustrated individuals are turning to the weaker link in the chain: people.
As Alberto Romero of The Algorithmic Bridge observes: “Increasingly, as the objects of people’s anger and frustration and desperation become unreachable behind fences and guards, or abstracted away in ones and zeros, or elevated above the clouds, the mob will turn their unassailable emotions toward human targets.”
🗣️ THE SELF-DEFEATING RHETORIC
There’s a deeply uncomfortable irony here. The AI industry’s own rhetoric is feeding the backlash.
Every time Dario Amodei or Sam Altman tells the public they could lose their jobs, it doesn’t inspire resilience — it inspires rage. The message “AI will transform everything and you need to adapt” sounds very different to someone whose skills are being devalued than it does to a CEO whose net worth just doubled.
The Algorithmic Bridge frames it precisely: “The most serious mistake the AI industry made after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition, was making it explicit by doing constant discourses that amount to: ‘we are creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition.’”
AI has become the perfect scapegoat. People resent it enough to blame it for everything going wrong in their lives, regardless of the truth. Legitimate grievances, like data theft, get mixed with misinformation. Most layoffs are not caused by AI, but it makes a convenient cover for decisions that would otherwise be socially reprehensible.
And the industry keeps handing its critics ammunition: if AI is so powerful, so dangerous, and soon to be so ubiquitous, why is anyone surprised that people blame everything on it?
🇳🇿 THE NEW ZEALAND CONNECTION
New Zealand is not immune to this dynamic. The debate over AI factories and datacenter construction in NZ — particularly the Southland AI datacenter proposal — has generated significant community opposition.
So far, NZ’s pushback has stayed in the realm of policy debate and environmental concern. But the global escalation pattern suggests that if communities feel unheard — if they believe datacenters are being imposed on them without consent, without benefit sharing, without transition support — the temperature rises.
The key escalation scenario, as The Algorithmic Bridge identifies, is deceptively simple: people come to feel they have no place in the future. If they feel expelled from the system — unable to earn, their skills obsolete, their communities transformed without their input — then they feel they have nothing to lose. And then the calculation changes.
New Zealand has an opportunity to model a different path: genuine community engagement, benefit sharing, transition support, and regulation that gives people a voice. The alternative is watching the same pattern play out here that’s already escalating overseas.
🔍 THE BOTTOM LINE
Violence against AI infrastructure and the people behind it is escalating globally. From nation-state threats to lone-wolf attacks, the targets are shifting from machines to humans as AI infrastructure becomes physically impenetrable.
This must be condemned. Nothing justifies violence against people. Full stop. But understanding why it’s happening is not the same as excusing it.
The historical parallel is clear. The Luddite movement turned violent when workers felt they had no alternative. The same dynamic is emerging today — when people feel powerless against unstoppable technological change and unheard by the institutions meant to represent them, some will turn to violence.
The AI industry’s own rhetoric is making it worse. Constantly telling people their jobs will disappear while selling $20/month subscriptions to the technology replacing them is a recipe for resentment. The industry needs more self-awareness about the disruption it’s causing, and more care in how it talks about it.
For New Zealand: this is a warning, not a prediction. The datacenter debate here has stayed civil. It needs to stay that way — and the best way to ensure that is genuine community engagement, benefit sharing, and transition support. People who feel they have a stake in the future don’t attack it.
Nothing good will come of this. The violence is wrong. The grievances are real. Both things can be true at the same time, and both need to be addressed.
SOURCES
- The Algorithmic Bridge — AI Will Be Met With Violence (April 2026)
- San Francisco Police Department — Incident Report (April 2026)
- Indianapolis Metropolitan Police — Incident Report (April 2026)
- Tom’s Hardware — Iran Threatens OpenAI Stargate Campus (March 2026)