OpenAI, the company behind ChatGPT, is actively lobbying for an Illinois bill that would sharply reduce its legal liability for damages caused by advanced AI systems. The proposed legislation, SB 3444, would shield developers of 'breakthrough' AI models from responsibility for catastrophic harm, including loss of life or damages running into the billions of dollars. The move is a proactive effort by AI pioneers to build a legislative 'shield' against the real-world consequences of their potentially hazardous creations.
The bill would allow developers of 'breakthrough' AI models to escape legal repercussions even when their technologies cause 'critical harms.' Exemption would require only two conditions: the absence of direct intent or gross negligence on the developer's part, and the formal publication of safety reports. A 'breakthrough model' is defined as one trained with computing resources costing more than $100 million, which puts OpenAI, Google, xAI, Anthropic, and Meta among the likely beneficiaries. Under the proposed legislation, if an AI model committed acts that would constitute a criminal offense for a human and caused catastrophic consequences, the developer would essentially go unpunished, provided the reporting requirements had been formally met.
Jamie Radice of OpenAI claims the company's support for such measures is aimed at reducing the risks of serious harm from advanced systems while accelerating the deployment of AI technology to users and businesses. Caitlin Niedermeyer of OpenAI's global affairs team echoed these sentiments, stating that individual states attempting to regulate AI create unnecessary friction that 'does not improve safety.' Both OpenAI representatives have testified in support of SB 3444. Their arguments amount to telling regulators that the priority should not be preventing harm but equipping society with tools to manage AI, without hindering its rapid development.
This initiative by OpenAI signals a reluctance among AI giants to accept full accountability for disasters their technologies might cause. If enacted, the Illinois law could set a dangerous industry precedent, weakening incentives to build genuinely safe AI systems and raising risks for businesses and society. In effect, it would legitimize an AI arms race with minimal consequences for those driving it, shifting the burden of insuring against risk onto the public while granting companies broad operational freedom. For you as a CEO, this means preparing for a landscape in which leading AI providers may refuse to share responsibility for malfunctions or unpredictable behavior in the systems you deploy.