Google Takes Action Against AI-Generated Images in Search Results

The rise of AI-generated images has created a new challenge for Google Search. Synthetic images are increasingly appearing in results, often crowding out genuine photographs and making it harder for users to find what they’re actually looking for. To address this, Google announced on Tuesday that it will start labeling AI-generated and AI-edited images in its search results.

The labeling will roll out in the coming months and will appear in the “About this image” window across Google Search, Google Lens, and Android’s Circle to Search feature. Google is also extending the technology to its ad services and is considering a similar flag for YouTube videos, though the company says more information on YouTube labeling will come later this year.

Google will rely on Coalition for Content Provenance and Authenticity (C2PA) metadata to identify AI-generated images. This metadata tracks an image’s provenance, recording details such as when and where it was created and the software and hardware used to generate it. Google joined C2PA as a steering committee member earlier this year.
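In practice, a C2PA manifest embedded in an image carries signed “assertions” describing how the asset was made, and generative-AI output is typically marked with the IPTC digital source type trainedAlgorithmicMedia. The Python sketch below shows how parsed manifest JSON might be checked for that marker. It is a minimal illustration, not Google’s pipeline: the looks_ai_generated helper and the EXAMPLE_STORE layout are assumptions loosely modeled on the JSON emitted by C2PA tooling, and a real application would read the manifest with an official C2PA SDK and verify its signatures first.

```python
# IPTC digital source type commonly used in C2PA manifests to mark
# content produced by a generative-AI model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def looks_ai_generated(manifest_store: dict) -> bool:
    """Walk every manifest's assertions and report whether any
    c2pa.actions entry carries the AI-generation source type.
    (Assumes the manifest store has already been read and its
    signatures verified by a real C2PA SDK.)"""
    for manifest in manifest_store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            if assertion.get("label", "").startswith("c2pa.actions"):
                for action in assertion.get("data", {}).get("actions", []):
                    if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                        return True
    return False


# Hypothetical manifest store, sketched after the shape C2PA tooling
# emits; real output contains many more fields (signature info, hashes,
# ingredient chains, etc.).
EXAMPLE_STORE = {
    "active_manifest": "urn:uuid:1234",
    "manifests": {
        "urn:uuid:1234": {
            "claim_generator": "some-image-generator/1.0",
            "assertions": [
                {
                    "label": "c2pa.actions",
                    "data": {
                        "actions": [
                            {
                                "action": "c2pa.created",
                                "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                            }
                        ]
                    },
                }
            ],
        }
    },
}

if __name__ == "__main__":
    print(looks_ai_generated(EXAMPLE_STORE))  # -> True
```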

While a number of industry heavyweights have joined C2PA, including Amazon, Microsoft, OpenAI, and Adobe, the standard has not seen widespread adoption among hardware manufacturers; it currently appears on only a handful of Sony and Leica camera models. Some prominent developers of AI-generation tools have also declined to adopt it, including Black Forest Labs, creator of the Flux model that Grok uses for image generation.

The past two years have seen an explosion in online scams built on AI-generated deepfakes. One notable example occurred in February, when a Hong Kong-based financier was tricked into transferring $25 million to scammers who impersonated the company’s CFO on a video conference call. In May, a study by verification provider Sumsub reported a 245% global increase in deepfake scams between 2023 and 2024, including a 303% increase in the U.S. specifically.

This surge in deepfake scams has been attributed to the accessibility of AI-generation tools. As David Fairman, chief information officer and chief security officer, APAC, at Netskope, told CNBC in May, easy access to these services has lowered the barrier to entry for cybercriminals: they no longer need specialized technical skills to carry out these scams.
