AI Deepfakes Pose a Threat to Elections, But Ethics and Regulation Lag

As India’s staggered election begins, deepfake creator Divyendra Singh Jadoun reports a surge in requests from politicians seeking to use AI to create deceptive content. Despite declining unethical jobs, Jadoun anticipates many consultants will oblige, potentially distorting the world’s largest election with more than half a billion voters.

The explosion of AI tools is transforming democratic processes, enabling the creation of seamless fake media. Over half of the world’s population lives in countries hosting elections in 2024, making it a pivotal year for global democracies. While the extent of AI-generated political deepfakes remains unknown, experts observe an alarming uptick.

Amidst concerns, policymakers and regulators are racing to craft legislation restricting AI-powered audio, images, and videos on the campaign trail. However, a regulatory vacuum exists. The European Union’s AI Act takes effect after June parliamentary elections, and bipartisan legislation in the U.S. Congress is unlikely to become law before November elections. A handful of U.S. states have enacted laws penalizing deceptive videos, creating a fragmented policy landscape.

With limited guardrails and enforcers often outmatched by the rapid spread of fakes on social media, deterring politicians from using AI to mislead voters is challenging. The democratization of AI places the onus on individuals like Jadoun, not regulators, to make ethical choices and prevent AI-induced election chaos.

Historically, nation-state groups have leveraged misinformation on social media to influence elections. However, AI empowers smaller actors, making combating falsehoods more complex. The Department of Homeland Security has warned election officials about the potential misuse of generative AI in foreign-influence campaigns.

State-backed actors have already employed generative AI to meddle in Taiwan’s elections. On election day, a Chinese Communist Party-affiliated group posted AI-generated audio of a prominent politician endorsing another candidate, which was later removed by YouTube. While Taiwan ultimately elected a candidate opposed by the Chinese Communist Party, Microsoft anticipates China may use similar tactics in India, South Korea, and the United States.

Beyond state-backed actors, the low cost and widespread availability of generative AI tools enable individuals to engage in trickery. In Moldova, AI deepfake videos have depicted the pro-Western president resigning and urging support for a pro-Putin party. In South Africa, a digitally altered version of Eminem endorsed an opposition party, and in the United States, a Democratic operative faked President Biden’s voice to discourage primary voters.

The rise of AI deepfakes could disproportionately affect female candidates, who are often targeted with synthetic content. In Bangladesh, opposition politician Rumeen Farhana faced sexual harassment and a deepfake photo of her in a bikini, which drew harassing comments and may deter her and others from political participation.

In the absence of robust federal action, some states are taking the lead. About 10 states have adopted laws penalizing the use of AI to deceive voters, including Wisconsin and Michigan. However, the penalties may not be sufficient to deter offenders, and enforcement challenges persist.

Government officials and tech companies are seeking voluntary agreements to control the proliferation of AI-generated election content. The European Commission has urged political parties to resist manipulative techniques, but compliance is not mandatory. Social media platforms have been asked to label AI-generated content, with mixed responses.

Large technology companies have pledged to collaborate on detecting and removing harmful AI content during the 2024 elections. OpenAI has sought partnerships with social media companies to address the distribution of AI-generated political materials. However, companies face no penalties for failing to fulfill their pledges, and gaps between OpenAI’s stated policies and enforcement have emerged.

Ultimately, Jadoun emphasizes the need for individual vigilance. “Any content that is making your emotions rise to a next level,” he says, “just stop and wait before sharing it.”
