A fabricated audio clip purporting to be the voice of Philippine President Ferdinand Marcos Jr. has surfaced on YouTube, stoking tensions between the Philippines and China amid their ongoing territorial disputes in the South China Sea. The misleading audio, paired with a slideshow of images depicting Chinese vessels, has been circulating online and falsely portrays Marcos Jr. as instructing the Philippine military to take action against an unnamed foreign nation.

The Presidential Communications Office (PCO) has swiftly denounced the audio as entirely fake and a product of deepfake technology. In a statement, the PCO asserts that no such directive has been issued by the President. The office has expressed concern over the spread of misinformation and disinformation online and is actively working to combat it through its Media and Information Literacy Campaign.

Commenting on the incident, Ramon Beleno III, head of the political science and history department at Ateneo de Davao University, believes that Beijing and its supporters in the Philippines are unlikely to be responsible for the fake audio, as it does not align with their interests in the region. Beleno urges those spreading such content to refrain from actions that could exacerbate tensions between the Philippines and China.

Meanwhile, PCO Assistant Secretary Dale De Vera emphasizes the seriousness of the deepfake, highlighting its potential impact on foreign policy. He confirms that while deepfake content has surfaced in the past, the latest incident stands out as the first to involve such a potentially damaging topic.

Aboy Paraiso, assistant secretary of the Department of Information and Communications Technology, underscores the importance of addressing the deepfake issue and ensuring that those responsible are held accountable under the country's cybercrime law.

Jocel De Guzman, a co-founder of Scam Watch Philippines, emphasizes the need to educate the public about detecting online fraud as AI-powered applications become more prevalent. He advises individuals to critically assess the source of media clips and examine them for inconsistencies, such as mismatched lip movements, blurred faces, and unnatural eye movements.

Defense and security officials in the Philippines have long expressed concerns about the country's cybersecurity preparedness. Last year, Defense Secretary Gilberto Teodoro Jr. cautioned military and security personnel against using AI-powered apps to generate personal portraits, citing the potential for misuse and malicious activity.

Three lawmakers, Cavite Representatives Lani Mercado-Revilla and Ramon Revilla III and Agimat Rep. Bryan Revilla, have proposed a bill imposing stricter penalties for crimes committed using deepfake technology. The bill defines a 'deepfake' as any manipulated audio, visual, or audiovisual recording that could reasonably be mistaken for an authentic representation of an individual's speech or conduct. It highlights the potential for deepfakes to infringe copyright, breach data protection and privacy, and facilitate defamation.

The Philippine government remains committed to addressing the proliferation of deepfakes and other potentially harmful AI-generated content through its Media and Information Literacy Campaign and collaboration with relevant stakeholders.