OpenAI Disrupts Covert AI Operations Targeting Indian Elections

OpenAI, the creator of ChatGPT, said it acted within 24 hours to disrupt deceptive uses of artificial intelligence in a covert operation focused on the Indian elections. The company stated that STOIC, a political campaign management firm in Israel, had generated some content on the elections and the Indian political landscape.

‘In May, the network began generating comments that focused on India, criticized the ruling BJP party and praised the opposition Congress party. In May, we disrupted some activity focused on the Indian elections less than 24 hours after it began,’ said OpenAI in a report on its website.

Following this, Meta reported that it had banned a group of accounts originating from Israel, which were utilized to create and modify content for an influence campaign across Twitter, Facebook, Instagram, various websites, and YouTube. ‘This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content,’ it added.

Union Minister Rajeev Chandrasekhar called it a ‘dangerous threat to our democracy,’ saying the Bharatiya Janata Party (BJP) is the target of such influence operations, misinformation, and foreign interference.

‘It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political actors,’ the BJP leader said. ‘This is a very dangerous threat to our democracy. It is clear vested interests in India and outside are driving this, and it needs to be deeply scrutinized/investigated and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,’ he added.

OpenAI asserted that it is committed to developing safe AI, enforcing policies that prevent abuse, and improving transparency around AI-generated content. ‘Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment.’

‘In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,’ it said.

‘We nicknamed this operation Zero Zeno, for the founder of the stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, Twitter, and websites associated with this operation,’ it added.
