Google’s AI Tool Sparks Concern Over Misinformation

Google has recently incorporated an AI tool into its search engine, known as ‘AI Overviews.’ The new feature has reportedly served misleading information to users, as documented in numerous social media and news reports. In particularly concerning instances, the tool reportedly suggested that users eat rocks and returned an alarming answer to the search ‘I’m feeling depressed.’

The experimental tool gathers information from across the web and uses an AI model to summarize search results. It has been made available to select users in the U.S., with a worldwide rollout planned for later this year, as Google announced on May 14 at its I/O conference.

The tool has already drawn widespread dismay on social media, with users reporting that AI Overviews has occasionally built its summaries from articles on satirical websites such as The Onion and from joke posts on Reddit. ‘You can also add about 1⁄8 cup of non-toxic glue to the sauce to give it more tackiness,’ AI Overviews reportedly advised in response to a query about pizza, according to a shared screenshot. That response appears to trace back to a decade-old joke comment on Reddit.

Other erroneous claims include assertions that Barack Obama was a founding father, that John Adams graduated from the University of Wisconsin, that a dog played in the Super Bowl, and that dishwasher detergent aids digestion. Live Science was unable to independently verify these posts.

Asked about the prevalence of inaccurate results, Google representatives said the examples observed were ‘generally very uncommon queries, and aren’t representative of most people’s experiences.’ ‘The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,’ the statement read. ‘We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we’ve taken action — and we’re also using these isolated examples as we continue to refine our systems overall.’

This is not the first time generative AI models have fabricated information, a phenomenon known as ‘hallucination.’ In one notable earlier case, ChatGPT and LLaMA falsely accused a real law professor of wrongdoing, citing fictitious newspaper reports as evidence.
