California Governor Gavin Newsom, a Democrat, has vetoed a bill that would have established safety measures for large AI models, dealing a significant setback to those seeking to put guardrails around a technology that is rapidly evolving with little oversight.
The legislation, S.B. 1047, which would have been the first of its kind in the nation, faced staunch opposition from tech giants, startups, and several Democratic lawmakers. Newsom, who has previously championed the need for state-level AI regulation, argued that the bill could have stifled the tech industry by imposing overly stringent requirements.
Newsom’s veto comes amid growing concern that AI could be misused. The bill sought to address those risks by requiring companies to test their AI models, disclose their safety protocols, and build in safeguards against manipulation of the models for purposes such as disrupting the state’s power grid or creating chemical weapons. It also included whistleblower protections for industry workers.
While acknowledging the bill’s good intentions, Newsom argued that it applied overly broad standards, sweeping in even basic AI applications, and said he did not believe it would effectively address the genuine threats posed by the technology.
Instead of enacting S.B. 1047, Newsom announced that California would partner with industry experts to develop safety measures for powerful AI models, signaling a shift toward a more collaborative approach in which the state works with industry leaders rather than imposing strict regulations.
Supporters of the bill, including Democratic state Senator Scott Wiener, expressed disappointment, characterizing the veto as a setback for efforts to ensure public safety and welfare in the face of rapidly advancing AI technology. They argued that voluntary commitments from industry alone are insufficient to address the growing risks associated with AI, particularly as companies themselves acknowledge the potential for harm.
While the bill’s fate has been decided, the debate over AI safety continues. Newsom’s decision highlights the challenge of regulating a technology that is evolving rapidly and holds immense potential for both good and harm. California, long a leader in technology, now faces the task of balancing innovation against the need to safeguard the public from the risks of powerful AI.