In a move to protect the integrity of elections, California Governor Gavin Newsom has signed three bills into law that aim to curb the use of artificial intelligence (AI) in creating misleading images or videos for political advertising. These bills specifically target deepfakes, which are AI-generated images or videos that can be used to manipulate or deceive viewers.
Effective immediately, the new laws make it illegal to create and distribute election-related deepfakes within a window that opens 120 days before Election Day and closes 60 days after it. Courts will have the power to halt the distribution of such materials and impose civil penalties on violators. Governor Newsom emphasized the importance of safeguarding elections, stating, “Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation — especially in today’s fraught political climate.”
Large social media platforms will also be held accountable under the new laws. Companies such as Elon Musk’s X (formerly Twitter), Meta’s Facebook and Instagram, and ByteDance-owned TikTok will be required to remove deceptive election-related content. Political campaigns, for their part, must publicly disclose when their ads contain material altered by AI.
Governor Newsom signed the laws during a conversation with Salesforce CEO Marc Benioff at the software company’s annual conference in San Francisco. Notably, their enactment coincides with members of Congress unveiling federal legislation aimed at stopping election deepfakes. That federal bill would grant the Federal Election Commission the authority to regulate the use of AI in elections.
The misuse of AI to create deepfakes has become a growing concern. A study by Google’s DeepMind found that deepfakes of politicians and celebrities were more prevalent than AI-assisted cyberattacks. This year, AI image-creation tools from ChatGPT-parent OpenAI and from Microsoft were reported to be fueling election misinformation. In January 2024, deepfake attacks on public figures, including Taylor Swift and President Joe Biden, raised alarm at the White House. The U.K. also received warnings about AI misinformation targeting its 2024 polls.
The passage of these laws in California signifies a proactive approach to addressing the potential for AI-generated misinformation to influence elections. As AI technology continues to advance, it is crucial for policymakers to stay ahead of the curve and implement measures to protect the integrity of democratic processes.