New Hampshire Robocall Prompts Lawmakers to Propose Deepfake Legislation
After an AI-generated robocall impersonating President Biden in New Hampshire, state officials are investigating and proposing legislation to address the issue of deepfakes in political campaigns.
The robocall, which urged recipients to skip the state’s primary in January, has prompted New Hampshire lawmakers to back legislation that would prohibit deepfakes within 90 days of an election unless they’re accompanied by a disclosure stating that AI was used.
New Hampshire is one of at least 39 states considering measures that would add transparency to AI-generated deepfake ads or calls as political campaigns intensify ahead of the November presidential election.
Other states’ efforts are largely focused on identifying content produced using AI as opposed to controlling that content or prohibiting its distribution.
In Wisconsin, a measure signed into law requires political ads and messages produced with synthetic audio or video, or made using AI tools, to carry a disclaimer. Failure to comply results in a $1,000 fine for each violation.
The Florida legislature, meanwhile, passed legislation with a bit more teeth: failure to disclose the use of AI-enabled messages would constitute a misdemeanor punishable by up to a year in prison.
Arizona is also considering similar measures requiring disclaimers in the 90-day period before an election; repeated violations could result in a felony charge.
Unlike the states, Congress appears explicitly interested in regulating the content of deepfakes. Several bills would prohibit their circulation, including a measure backed by Senators Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., Susan Collins, R-Maine, Pete Ricketts, R-Neb., and Michael Bennet, D-Colo., that would prohibit the distribution of AI-generated material targeting a candidate for federal office.
Another, backed by Sen. Richard Blumenthal, D-Conn., and Hawley, would remove protections under Section 230 of a 1996 communications law for AI-generated content, exposing online platforms to legal liability for hosting such material and thereby likely forcing its removal.
Deepfakes have targeted presidential, congressional, and even local elections, said Blumenthal at a recent hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
Technology could help identify deepfakes: AI companies could be required to watermark or stamp AI-generated content to distinguish it from audio and video produced by real humans. Requiring such watermarking of AI-created campaign ads, however, is no guarantee that every creator of such ads will comply.
The challenge of confronting AI-generated deepfake ads and messages can’t be handled by states alone, said David M. Scanlan, New Hampshire’s secretary of state.
“At some point, I believe that there is a federal component to this, because it’s going to be a national problem,” Scanlan said. “You know these things in a national election are going to be generated nationally, whether it’s foreign actors or some other malicious circumstances.”
“And I think we need uniformity, and the power of the federal government to help put the brakes” on the use of AI-generated deepfake campaign ads, Scanlan said.