OpenAI Backs Bill Shielding AI Labs From Mass-Harm Suits

OpenAI is backing an Illinois bill that would protect AI labs from lawsuits even when their models contribute to mass casualties or billion-dollar disasters.


OpenAI is lobbying for an Illinois state bill that would block lawsuits against AI developers for "critical harms" - a legal category that includes deaths, mass injuries, and catastrophic financial damage. The company's support for SB 3444 marks a clear turn in its regulatory strategy: from opposing bills that impose AI liability to actively pushing legislation that removes it.

TL;DR

  • Illinois SB 3444 would shield frontier AI developers from lawsuits over catastrophic harms, including 100+ deaths, $1B+ in damages, or weapons-of-mass-destruction events
  • Protection requires only that labs publish a safety plan online and avoid acting recklessly - a low bar
  • "Frontier model" is defined as any system trained on $100M+ in compute, covering OpenAI, Google, Anthropic, xAI, and Meta
  • 90% of Illinois residents oppose exempting AI companies from liability, per a Secure AI Project poll
  • At least three other states are watching this bill as a potential template

What the Bill Actually Says

Illinois SB 3444, titled the Artificial Intelligence Safety Act, creates two conditions under which a frontier AI developer cannot be held liable for a critical harm caused by its model. The developer must not have intentionally or recklessly caused the harm, and it must have published a safety and security protocol and a transparency report on its website.

That's it.

The burden on developers is administrative: write a document, put it online. Plaintiffs, meanwhile, would need to prove that the harm was both foreseeable and preventable through reasonable safety measures - a standard that, in practice, would be straightforward for AI labs to defeat in court.

"Critical harm" covers three scenarios: the death or serious injury of 100 or more people, at least $1 billion in property damage, or a bad actor using AI to help develop a chemical, biological, radiological, or nuclear weapon. These aren't hypotheticals. They're scenarios that AI safety researchers have been warning about for years, and the bill's design would make them largely unreachable by tort law.

The frontier model threshold - any system trained on more than $100 million in compute - captures exactly the companies most likely to cause large-scale harm: OpenAI, Google DeepMind, Anthropic, xAI, and Meta.

The Illinois State Capitol in Springfield, where SB 3444 is under consideration. The bill is part of a broader wave of AI legislation moving through the chamber in 2026. Source: commons.wikimedia.org

OpenAI's Case for the Bill

OpenAI spokesperson Jamie Radice testified in support, arguing that the company backs "approaches like this because they focus on what matters most...while still allowing this technology to get into the hands of the people and businesses...of Illinois."

The company's broader argument is familiar: uniform federal standards are preferable to a patchwork of state rules, this bill provides clarity, and uncertainty over legal exposure slows beneficial AI deployment.

"We support approaches like this because they focus on what matters most...while still allowing this technology to get into the hands of the people and businesses...of Illinois."

  • Jamie Radice, OpenAI spokesperson

What's new is the posture. Until this year, OpenAI played defense on liability, opposing bills that would expose it to legal risk. Backing SB 3444 is an offensive move - locking in protection before courts or regulators have a chance to weigh in on what accountability looks like when AI causes genuine harm.

Companies

The bill's direct beneficiaries are the same handful of companies that built the systems in question. OpenAI, Google, Anthropic, xAI, and Meta all clear the $100 million compute threshold. If SB 3444 passes and similar laws follow in other states, these labs would face dramatically reduced legal exposure for the worst-case outcomes their own safety teams have documented as plausible.

The calculus isn't complicated. Litigation risk is a real cost. Reducing it through legislation rather than through product safety is cheaper and more durable.

Users and the Public

For everyone else, the tradeoffs are less favorable. State tort law is one of the few practical tools available to individuals and families harmed by AI systems. Several families have already sued OpenAI over ChatGPT-related harms. If this bill passes and its logic spreads - as it well might, given that at least three other states are considering similar measures - those cases become much harder to bring.

Nathan Calvin, an AI policy researcher, put it bluntly on X: "This bill would give AI companies near total legal immunity for 'critical harms'...as long as they have a safety plan, no matter how bad the plan is."

Opponents

One safety researcher described OpenAI's approach as: "Get favorable legislation in place before the bodies pile up, then point to those laws when people try to seek accountability." The critique is that the bill converts accountability into a paper exercise - publish a safety report, claim good faith, escape liability.

The Secure AI Project found that 90% of Illinois residents oppose liability exemptions for AI companies. Scott Wisor, the group's policy director, assessed passage as unlikely given Illinois's track record: the state previously passed biometric privacy protections and recently enacted restrictions on AI use in mental health therapy.

Civil liability is one of the few mechanisms available to individuals harmed by AI. Critics say SB 3444 would neutralize it for the worst-case scenarios. Source: pexels.com

The Liability Map Is Shifting

Illinois isn't isolated. OpenAI's move fits a broader pattern of AI companies taking active positions on state legislation rather than waiting for Washington to act.

New York passed the RAISE Act last year, setting mandatory safety requirements for frontier models - a regulatory approach that sits at the opposite end of the spectrum from SB 3444. The divergence between states isn't a bug; it's a test of how different political environments respond to the same underlying question about who bears the cost when AI causes harm.

Utah went a different direction, allowing AI systems to autonomously renew certain psychiatric medication prescriptions - a move that shifts liability questions into healthcare rather than technology law. The pattern is the same: states making consequential decisions about AI risk allocation without a federal framework in place.

The corporate strategy implicit in SB 3444 has a precedent. Delaware became the dominant state for corporate incorporation by offering favorable legal structures. OpenAI's push here follows the same template - establish liability-friendly legislation in one state, use it as a model, let other states compete for AI industry presence by matching those terms. Whether that dynamic actually plays out depends on how much appetite lawmakers have for a race toward minimal accountability.

What Happens Next

SB 3444 still needs to move through committee and a full chamber vote. Given Illinois's regulatory history and the polling numbers, the bill faces real headwinds in Springfield.

But the vote isn't the only outcome that matters. By supporting the bill publicly, OpenAI has established its position on what an acceptable liability framework looks like: publish a safety plan, avoid intentional or reckless conduct, and walk away from even catastrophic outcomes without legal exposure. That position will inform its approach in other states, its submissions to federal regulators, and its response to future litigation.

The departures of safety-focused researchers from OpenAI over the past year already raised questions about how the company balances commercial momentum against risk. Backing a bill that limits accountability for mass-casualty events adds a concrete data point to that record.

Whether any court will ever need to test these limits is unknown. What's clear is that OpenAI is working to ensure the answer is as favorable as possible before that test arrives.


About the author

Daniel, AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.