Microsoft Raises Concerns Over AI Use in Disinformation Campaigns

Microsoft President Brad Smith has expressed concerns about artificial intelligence (AI) being used to spread disinformation in the upcoming European Parliamentary elections.

Smith revealed that Microsoft has invested heavily in AI infrastructure in Sweden to tackle the issue, while acknowledging the growing use of AI-generated deepfakes in elections globally, including in India, the United States, Pakistan, and Indonesia. However, he noted that Microsoft has not detected any significant attempts to exploit AI in the European Parliamentary elections.

Despite the lack of widespread abuse, Smith urged caution, stating that the situation is ongoing and that premature declarations of victory would be unwarranted. He also highlighted the potential threat posed by deepfake videos, citing instances during India’s general election and a debunked Russian-language video circulated ahead of the European Parliament election.

Smith also mentioned the focus of Russian efforts on the Olympics and hinted at an upcoming report addressing related concerns.

Scarlett Johansson Claims OpenAI Used Her Voice Without Consent for ChatGPT 4

Actress Scarlett Johansson has accused OpenAI of using her voice without her consent for their new ChatGPT 4o chatbot. Johansson says that the voice, named “Sky,” is “eerily similar” to her own, and that she declined the company’s request to use her voice when they originally approached her. The situation has raised concerns about the use of deepfakes and the protection of personal identities in the era of artificial intelligence.

AI’s Deepfake Revolution in Indian Elections: Bringing Politicians Back to Life

As India gears up for the 2024 Lok Sabha elections, artificial intelligence (AI) is emerging as a powerful tool for political parties. AI-generated videos, voice cloning, and deepfakes are flooding social media, resurrecting deceased politicians and creating virtual avatars of living leaders. These tools are being used to engage young voters, create personalized messaging, and boost party morale. While experts debate the impact of AI on voters, concerns remain about the ethical use of deepfakes and the spread of misinformation.

“Election Integrity Our Highest Priority,” Says Google CEO Sundar Pichai

In light of upcoming elections in India and the US, Google CEO Sundar Pichai emphasizes the company’s unwavering commitment to election integrity, investing heavily in safeguarding its platforms like Search and YouTube. Pichai acknowledges the prevalence of deepfakes and the ongoing need for societal discernment in determining authenticity. Additionally, Elizabeth Reid, Head of Search at Google, highlights the company’s efforts to enhance information accessibility and user experience through advancements in AI.

India Poised to Shape Global AI Development, Says Google CEO Sundar Pichai

India has a significant opportunity to shape the future of artificial intelligence (AI), as the country is well-positioned to influence global AI development, according to Sundar Pichai, chief executive of Google and Alphabet. Speaking at a roundtable during Google’s annual conference, Google I/O, Pichai noted that technology shifts provide opportunities for emerging countries like India to catch up or leap ahead. He highlighted that India has skipped landlines and gone straight to mobile, and with each technology shift, there’s a chance to increase penetration. This trend is also true for AI, as India has a large and growing developer base that is actively engaged with AI platforms. However, Pichai also emphasized the responsibilities that come with the widespread adoption of AI, particularly regarding deepfakes and misinformation. Google is committed to election integrity and is taking steps to stay ahead of these problems through initiatives like SynthID and AI-assisted red teaming. Pichai stressed the need for diverse professional involvement in AI solutions, beyond engineers, including social scientists and philosophers, to ensure responsible development and address the societal implications of AI.

Connecticut Senate Approves Landmark AI Bias Mitigation Bill

In a significant step towards protecting citizens from bias and harm in AI decision-making, the Connecticut Senate has passed a comprehensive bill that addresses concerns over manufactured videos (deepfakes), discriminatory practices, and the need for transparency and accountability. Despite opposition from some lawmakers and industry representatives, the bipartisan bill passed with a 24-12 vote, marking a crucial step toward establishing uniform regulations for AI use across the nation. The legislation mandates digital watermarks on AI-generated images, creates an online AI Academy for public education, and requires certain AI users to implement policies to eliminate bias risks.

Connecticut Senate Advances Landmark AI Bias Mitigation Bill

Connecticut has taken a significant step in addressing AI bias with the Senate’s passage of a comprehensive bill that aims to protect individuals from discrimination and harmful practices in decision-making. The bill, which has been in development over two years, is considered one of the first major legislative proposals in the U.S. to address the growing concerns over AI bias and deepfakes.

Deepfake Ads Under Scrutiny Amid Rise in AI-Generated Political Messages

Deepfakes, which manipulate audio and video using artificial intelligence, are raising concerns ahead of the November presidential election. States like New Hampshire, Wisconsin, Florida, and Arizona are considering legislation to add transparency to AI-generated deepfake ads or calls, requiring disclaimers or prohibiting them within 90 days of an election. Congress is also considering measures to regulate the content of deepfakes, including prohibiting their circulation and removing legal protections for online platforms that post such content. Technology could help identify deepfakes, but challenges remain in ensuring compliance. States are facing limitations in handling deepfake challenges, and some believe federal involvement is necessary for a national solution.

States Move to Regulate AI-Generated Deepfakes Amid Election Concerns

As political campaigns intensify ahead of the November presidential election, states are considering measures to add transparency to AI-generated deepfake ads or calls. Deepfakes, which use artificial intelligence to create realistic audio and video of people saying or doing things they never did, have been used to target presidential, congressional, and even local elections. New Hampshire is one of at least 39 states considering legislation that would prohibit deepfakes within 90 days of an election unless they’re accompanied by a disclosure stating that AI was used. Wisconsin has passed a measure that requires political ads and messages produced using synthetic audio and video or made using AI tools to carry a disclaimer, while Florida has passed legislation that would make failure to disclose the use of AI-enabled messages a criminal misdemeanor punishable by up to a year in prison. Congress is also considering legislation to regulate deepfakes, including a measure that would prohibit their circulation and another that would remove protections under Section 230 of a 1996 communications law for AI-generated content.

AI’s Rise Poses New Cybersecurity Threats: Prompt Hacking and Private GPT Models

The proliferation of AI tools is having a significant impact on the cybersecurity landscape, leading to an increase in prompt hacking and the emergence of private GPT models without guardrails. These developments are enabling malicious actors to create more sophisticated attacks, highly credible scams, and deepfakes. Additionally, generative AI can accelerate the discovery of zero-day exploits and automate network intrusion attacks.

Scroll to Top