Former OpenAI employees William Saunders and Daniel Kokotajlo have penned a scathing letter to California Gov. Gavin Newsom, expressing their disappointment, though not surprise, at OpenAI’s opposition to a state bill proposing strict safety guidelines for AI development. The bill, SB-1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, seeks to implement rigorous safety protocols and regulations for advanced AI systems.
Saunders and Kokotajlo highlight their initial motivation for joining the company: to ensure the safe development of powerful AI technologies. However, they resigned after losing trust in OpenAI’s commitment to responsible and ethical AI development. They contend that unchecked AI development carries significant risks, including the potential for catastrophic harm to the public, such as unprecedented cyberattacks or the creation of biological weapons.
The former employees also point out what they see as hypocrisy in OpenAI CEO Sam Altman’s stance on regulation. Despite advocating for AI industry regulation in congressional testimony, Altman has opposed specific regulatory measures like SB-1047. They argue that OpenAI’s push for federal regulation is disingenuous, since the company itself acknowledges that Congress is unlikely to pass meaningful AI legislation. They therefore emphasize the need for immediate action at the state level, noting that any future federal regulation could preempt California’s legislation.
OpenAI, for its part, has rejected the former employees’ criticism, arguing that a patchwork of state laws would stifle innovation and hinder the US’s ability to lead on global AI standards. The company instead advocates for a unified federal framework for AI policy. Saunders and Kokotajlo maintain, however, that waiting for federal action is not an option, given Congress’s demonstrated unwillingness to pass meaningful AI regulation.
The controversy surrounding OpenAI’s opposition to SB-1047 underscores growing concerns about the risks of unregulated AI development. As AI technologies advance rapidly, robust safety protocols and responsible development practices become increasingly crucial to mitigating potential threats and ensuring public safety.