Anthropic and OpenAI are at odds over proposed Illinois legislation that would fundamentally reshape how AI companies face legal accountability for catastrophic harms. The disagreement signals deepening divisions between the two leading US AI labs as they ramp up lobbying across multiple states.
The contested bill, SB 3444, would shield AI labs from liability if their systems were used to cause mass casualties or property damage exceeding $1 billion, provided the developer had published a safety framework online. OpenAI has backed the measure. Anthropic has come out against it.
According to people familiar with the matter, Anthropic has been lobbying Illinois state senator Bill Cunningham, who sponsored SB 3444, and other state lawmakers to either make major changes to the bill or reject it outright. Cesar Fernandez, Anthropic's head of US state and local government relations, stated in a message to WIRED: "We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability."
The core dispute centers on who bears responsibility in an AI-enabled disaster scenario. Under SB 3444, an AI lab would not be liable if a bad actor used its model to create a bioweapon causing mass casualties, so long as the lab had drafted and published its own safety framework.
OpenAI has framed the bill as a necessary measure to reduce risk from frontier AI systems while enabling broad access to the technology. In a statement, OpenAI spokesperson Liz Bourgeois said: "In the absence of federal action, we will continue to work with states to work toward a consistent safety framework. We hope these state laws will inform a national framework that will help ensure the US continues to lead."
Anthropic takes the opposing view: companies developing frontier AI models should bear at least partial responsibility if their technology causes widespread societal harm. Thomas Woodside, cofounder and senior policy adviser at the Secure AI Project, told WIRED that existing liability law already provides "a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks." Removing that liability, he said, would be "a bad idea" because it weakens "the most significant form of legal accountability for AI companies that's already in place."
Anthropic is simultaneously backing an alternative measure, SB 3261, which it characterizes as one of the nation's strongest AI safety laws. That bill would require frontier AI developers to create public safety and child protection plans and submit them to independent third-party audits to assess their effectiveness. Anthropic testified in favor of SB 3261 last week.
Illinois governor JB Pritzker's office issued a statement opposing the liability shield: "Governor Pritzker does not believe big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest."
While AI policy experts assess the bill's chances of passage as remote, the conflict reveals a fault line between Anthropic and OpenAI that could widen as both companies intensify their lobbying presence across state legislatures. Anthropic, founded five years ago by former OpenAI employees, has built a public profile centered on articulating potential risks from advanced AI and advocating for protective safeguards, a stance that has drawn criticism from the Trump administration, which seeks to limit state-level AI regulations that it views as obstacles to development.