Instagram, one of the most popular social media platforms, has come under fire for failing to adequately enforce its own policies against nonconsensual deepfake nudes. A recent investigation by journalist Emanuel Maiberg revealed that the platform has been profiting from multiple ads that explicitly invited users to create such content. One particularly egregious ad featured a picture of Kim Kardashian with text that read, “Undress any girl for free. Try It.” This ad, along with others like it, ran on Facebook, Instagram, Facebook Messenger, and Meta’s in-app ad network from April 10 to 20.
The presence of these ads on Instagram raises significant concerns about the platform’s commitment to protecting its users from harmful content. Deepfakes, AI-generated images or videos that place real people in compromising situations without their consent, have become a growing problem online. They are frequently used to harass, intimidate, and exploit victims, particularly women and minorities.
Meta, the parent company of Instagram, has stated that it prohibits ads containing adult content and uses a combination of AI detection systems and human review to identify such material. However, the company’s failure to detect and remove these deepfake ads on its own suggests that it may not be doing enough to enforce those rules, raising questions about Meta’s priorities and its commitment to the safety and well-being of its users.
The investigation into Instagram’s ad ecosystem highlights the urgent need for social media platforms to take a more proactive approach to combating the spread of harmful content. This includes investing in more robust AI detection systems, increasing the number of human reviewers, and working with law enforcement to identify and prosecute those who create and distribute deepfakes.
Until Instagram and other social media platforms take these steps, users will remain vulnerable to the dangers of deepfakes and other forms of online abuse.