
Anthropic Opposes AI Liability Bill That OpenAI Supports — Industry Split on Legal Accountability

OpenAI wants liability shields. Anthropic doesn't. The battle over Illinois SB 3444 could define AI accountability for a generation.

AI Regulation · AI Safety · Legal Liability · OpenAI · Anthropic

When an AI system contributes to a mass casualty event or a billion-dollar financial disaster, who pays?

That question now has two very different answers from two of the most prominent frontier AI labs in the world. OpenAI is backing an Illinois state bill that would shield frontier AI developers from liability under certain conditions. Anthropic is publicly opposing it.

The split over Illinois Senate Bill 3444 isn’t a minor policy disagreement. It’s the clearest signal yet that the AI industry’s consensus on safety and accountability is fracturing along lines that could shape regulation worldwide for decades.


What the Bill Actually Does

Illinois SB 3444 defines “critical harms” as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. The bill would protect AI labs from liability for these events as long as two conditions are met: the company did not intentionally or recklessly cause the incident, and it had published safety, security, and transparency reports on its website.

The legislation specifically identifies scenarios including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human — provided such actions lead to the extreme outcomes defined in the bill.

The bill defines frontier models as those trained using more than $100 million in computational costs. That threshold applies to OpenAI, Google, Anthropic, xAI, and Meta — essentially every major American AI company.


OpenAI’s Position: Shield Innovation, Then Regulate

OpenAI has testified in favor of the bill. Jamie Radice, an OpenAI spokesperson, said: “We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses — small and big — of Illinois.”

Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and called for federal AI regulation. She argued against “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety” and suggested state laws should “reinforce a path toward harmonization with federal systems.”

OpenAI’s framing aligns with the Trump administration’s opposition to inconsistent state-level AI safety laws. The company wants a clear federal framework, and until one exists, it’s pushing for state-level bills that limit rather than expand liability.


Anthropic’s Position: No Free Pass for Frontier AI

Anthropic has come out against the bill. While the company hasn’t published a detailed statement on SB 3444 specifically, its opposition is consistent with its long-standing position that AI companies should bear meaningful responsibility for the capabilities they deploy.

This isn’t abstract positioning. Anthropic has previously taken concrete steps that demonstrate its willingness to accept constraints on its own business in exchange for safety commitments. The company’s very public dispute with the Pentagon over its models being used for military targeting — and its lawsuit against the Department of Defense over a “supply chain risk” designation — shows a company willing to fight the government rather than compromise on its safety principles.

The contrast is stark: one lab is lobbying for legal shields, the other is opposing them.


The Public Doesn’t Want What OpenAI Is Selling

Scott Wisor, policy director for the Secure AI project, told WIRED: “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability.”

Wisor pointed to Illinois’ history of aggressive technology regulation, including its landmark Biometric Information Privacy Act passed in 2008 and more recent legislation limiting AI use in mental health services, as evidence that the state is unlikely to pass a liability shield.

Other states are moving in the opposite direction entirely. California’s SB 53 and New York’s RAISE Act both require AI developers to submit safety and transparency reports, increasing rather than decreasing accountability measures.


The Existing Harm Is Already Real

The liability question isn’t hypothetical. OpenAI currently faces lawsuits from families of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT. The company also faces a lawsuit from families of victims of the February Canadian school shooting, which claims OpenAI knew the shooter was preparing an attack but did not contact authorities.

These cases are testing the boundaries of AI liability in real time. A bill like SB 3444 could preemptively close the door on similar lawsuits before courts have a chance to establish precedent.


Why This Matters for the Singularity

The AI liability question sits at the exact intersection of two forces that define the path toward artificial general intelligence: the speed of deployment and the cost of harm.

If AI companies face no legal consequences for catastrophic harms caused by their models, the economic incentive to deploy powerful systems quickly — before safety is fully understood — becomes overwhelming. The cost of a mistake stays externalized. The public bears the risk while the labs capture the reward.

If AI companies face open-ended liability for every downstream use of their technology, the legal risk could slow deployment to a crawl, potentially pushing development into less regulated jurisdictions where oversight is even weaker.

The right answer probably lies somewhere between those extremes. The fact that two leading AI labs — both claiming to prioritize safety — are on opposite sides of this question suggests the industry hasn’t figured out where the line should be drawn either.

What’s clear is that the decision won’t be made by technologists alone. Legislators, courts, and ultimately the public will decide how much accountability AI companies must accept. Illinois SB 3444 is one state bill. The precedent it sets — or doesn’t — will ripple far beyond Springfield.


Sources

  • WIRED — “OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters”
  • WIRED — “Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed”
  • Breitbart — “OpenAI Supports Illinois Bill to Limit AI Companies’ Liability for Mass Casualty Incidents”
  • Illinois Legislature — SB 3444