On August 28, Senate Bill (SB) 1047 passed the State Assembly and now awaits Governor Gavin Newsom’s decision. He has until September 30 to either sign it into law or veto it.
Authored by Democratic Senator Scott Wiener from San Francisco, SB 1047 aims to establish regulations before AI advancements become uncontrollable. The bill mandates safety testing for the most advanced AI models.
Developers in California, including companies like OpenAI, Meta, and Google, would need to build in “kill switches” to shut down models that become unmanageable. The bill also requires third-party safety audits and allows the state attorney general to sue developers whose models pose an ongoing threat, such as an AI takeover of critical infrastructure like the power grid.
Tech companies, particularly OpenAI, have strongly opposed the bill, arguing it would hinder growth. Google and Meta also expressed concerns to Governor Newsom in a letter.
Despite this, Amazon-backed Anthropic has supported the bill, stating that its benefits likely outweigh its costs. Eric Daimler, a former Carnegie Mellon professor and Obama Administration alumnus, sees SB 1047 as a potential model for federal AI legislation but believes there may be a more effective approach. He has voiced concerns about AI safety, emphasizing the need for thoughtful regulation.
Other AI Bills In California
SB 1047 isn’t the only AI-related legislation in California. SB 1220, which would ban AI use in welfare and health services call centers, has also sparked debate. In contrast, AB 3211, which would require watermarks on AI-generated content, has garnered support from companies like OpenAI and Microsoft. Billionaire Elon Musk, whose company xAI is developing its own AI model, Grok, has favored comprehensive AI safety regulations.
Cointelegraph has reached out to California legal experts and AI developers to better understand the implications of these bills.