In a significant development at the intersection of artificial intelligence and politics, a company involved in deceptive robocalls that used AI to mimic President Biden’s voice has agreed to pay a $1 million fine. The Federal Communications Commission (FCC) reached a settlement with Lingo Telecom, the voice service provider that transmitted the robocalls to New Hampshire voters, after initially seeking a $2 million penalty. The case is widely viewed as a concerning early example of how AI can be used to manipulate voter behavior and undermine democratic processes.
The deceptive phone messages were sent to thousands of New Hampshire voters on January 21, 2024. The messages falsely claimed that voting in the state’s presidential primary would prevent voters from casting ballots in the November general election. Steve Kramer, the political consultant who orchestrated the calls, still faces a proposed $6 million FCC fine as well as criminal charges for voter suppression and impersonating a candidate. He had previously acknowledged paying a magician and self-described “digital nomad” to create the AI-generated recording, claiming he intended to highlight the dangers of AI and push lawmakers to address the issue.
The FCC’s action underscores the risks posed by AI deepfakes and the importance of verifying that the voices and messages consumers receive are authentic. FCC Chairwoman Jessica Rosenworcel stated in a press release, “Every one of us deserves to know that the voice on the line is exactly who they claim to be. If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it. The FCC will act when trust in our communications networks is on the line.”
Lingo Telecom, though it agreed to the fine, had previously disputed the FCC’s action, calling it a retroactive attempt to impose new regulations. The consumer advocacy group Public Citizen, by contrast, praised the FCC’s stance, emphasizing the need for transparency and authenticity in communications. Co-president Robert Weissman called deepfakes an existential threat to democracy, stating that consumers have a right to know when they are receiving genuine content and when they are encountering AI-generated deepfakes.
The FCC’s enforcement bureau chief, Loyaan Egal, stressed the significant threat posed by the combination of caller ID spoofing and generative AI voice-cloning technology, which could be misused both by domestic actors seeking political advantage and by foreign adversaries attempting to influence elections or conduct other malicious activities. The case serves as a stark reminder of the growing need for safeguards against the misuse of AI and of the importance of protecting the integrity of democratic processes.