Deepfakes have become a formidable force in India's ongoing election, the world's largest democratic exercise. Divyendra Singh Jadoun, known as the "Indian Deepfaker," has been inundated with requests from politicians seeking to exploit the technology for dubious ends: manipulating audio recordings of competitors, superimposing opponents' faces onto pornographic content, or producing deliberately low-quality deepfakes of their own candidates so that genuine damaging footage can be dismissed as fake. These requests underscore the urgent need for comprehensive regulation and ethical guidelines to prevent the misuse of deepfakes.

While Jadoun adheres to his own ethical principles, he acknowledges that many consultants will succumb to such demands. In the absence of robust oversight, the responsibility falls on individuals like him to guard against the chaos deepfakes could unleash on the electoral process. The democratization of AI technology has shifted the burden of ethical decision-making away from regulators and toward individuals. As campaigns become increasingly susceptible to deepfake manipulation, citizens must exercise caution when consuming digital content. Jadoun urges people to pause and consider the veracity of information before sharing it, especially when it provokes a strong emotional response.

The ramifications of deepfake proliferation extend far beyond the electoral arena, potentially shaping who runs for office and deterring women from participating in public life. Rumeen Farhana, a Bangladeshi politician, has faced relentless sexual harassment and character assassination through the circulation of a deepfake image depicting her in a bikini. Such tactics can dissuade female candidates from pursuing political careers, further marginalizing their voices.

The lack of adequate detection technology and enforcement mechanisms makes combating deepfakes difficult. Some states have enacted laws penalizing the use of AI to deceive voters, but their effectiveness remains questionable. In the absence of federal regulation, government officials and tech companies are exploring voluntary agreements to control the proliferation of AI-generated election content; because these agreements carry no enforceable consequences, their efficacy is in doubt. Large social media platforms have pledged to label AI-generated content, yet gaps persist between their stated policies and their enforcement. OpenAI, a leading AI research company, has sought to work with social media companies on the distribution of AI-generated political material, but it has been criticized for failing to prevent uses of its technology that violate its own policies.

The burden of combating deepfake manipulation cannot rest solely on governments or tech companies. Citizens must be educated to recognize and critically assess deepfake content. Only through a combination of ethical choices by those who wield the technology and an informed citizenry can the negative impacts of deepfakes be mitigated and the integrity of democratic processes preserved.