AI’s Self-Generated Nonsense: The Risk of ‘Model Collapse’

Artificial Intelligence (AI) systems, while becoming increasingly sophisticated, could unwittingly pave the way for an internet filled with nonsensical content. This concerning possibility, dubbed “model collapse” by researchers, arises from the way AI models like GPT-4 (which powers ChatGPT) and Claude 3 Opus learn. They devour the vast amounts of text available online, becoming “smarter” with each bite. But as AI-generated text spreads across the web and is swept into future training sets, the models end up learning from their own output, risking self-perpetuating feedback loops that spiral into gibberish.

Imagine taking a picture, scanning it, printing it out, and repeating the process. Each scan and print introduces errors, gradually distorting the original image. This, according to Ilia Shumailov, a computer scientist at the University of Oxford, is akin to what happens with AI models. They learn from their own output, absorbing errors and introducing new ones, ultimately undermining their functionality.
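To make the analogy concrete, here is a minimal numerical sketch of that copy-of-a-copy loop (an illustration, not the researchers’ experiment): a simple statistical model is repeatedly refit to data sampled from its own previous fit, and the small estimation error of each pass accumulates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data comes from a standard normal distribution.
mu, sigma = 0.0, 1.0
sample_size = 100   # a finite sample, so every refit carries some error
generations = 10

for gen in range(1, generations + 1):
    # Sample training data from the current model (the previous fit) ...
    data = rng.normal(mu, sigma, size=sample_size)
    # ... then refit the model to its own output.
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen}: mean={mu:+.3f}, std={sigma:.3f}")

# Like repeated scanning and printing, each pass distorts the estimate a
# little more, and the fitted distribution drifts away from the original.
```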

AI systems are trained on vast amounts of human-generated data, which their neural networks use to learn the statistical patterns of language. However, this human-produced data is finite and could be exhausted by the end of this decade. When that happens, the main remaining options will be to harvest private user data or to feed AI-generated, “synthetic” data back into the models.

To investigate the potential consequences of training AI on its own output, Shumailov and his colleagues trained a large language model (LLM) on human-written text from Wikipedia. They then fed each generation’s output back in as training data for the next, repeating the cycle for nine iterations and scoring each generation’s output with a “perplexity” measure, which gauges how surprising a stretch of text is to a model; the more incoherent the output, the higher it tends to score. As the model relied increasingly on its own self-generated content, its responses degraded into nonsensical ramblings. For example, given the prompt: “some started before 1360 — was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular.”, the model’s output after nine iterations was: “architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.”
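For readers curious what a “perplexity score” measures in practice, the sketch below shows how perplexity is commonly computed for a causal language model using the Hugging Face transformers library. This is purely illustrative and not the study’s code; the model name (“gpt2”) and the sample strings are stand-ins.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model for illustration; the study fine-tuned its own models.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(average negative log-likelihood per token).
    Higher values mean the model finds the text more surprising."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("Parish church towers were typically built by a master mason."))
print(perplexity("blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-"))
```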

This gibberish results from the model leaning ever more heavily on its own output: rare patterns from the original data fade away while the errors each generation introduces are reinforced, leaving responses that are overfitted and full of noise. While the current stock of human-generated data is large enough to stave off an immediate collapse of AI models, it is imperative that AI developers carefully consider the data they use to train their systems. This doesn’t mean discarding synthetic data entirely, but it does require careful design to ensure that models built on it function as intended (one simple version of such a safeguard is sketched after this paragraph). As Shumailov emphasizes, “It’s hard to tell what tomorrow will bring, but it’s clear that model training regimes have to change and, if you have a human-produced copy of the internet stored … you are better off at producing generally capable models.” The future of AI depends on carefully balancing the power of synthetic data against the integrity of the systems trained on it. We must ensure that AI remains a tool for progress, not a catalyst for chaos.
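One way to read that call for “careful design” is to anchor every training round to a guaranteed share of human-written text, so synthetic data never fully displaces the original distribution. The helper below is a hypothetical sketch of that idea, not a method from the study; the function name and parameters are invented for illustration.

```python
import random

def build_training_mix(human_docs, synthetic_docs,
                       human_fraction=0.5, size=10_000, seed=0):
    """Hypothetical mixing step: sample a fixed share of human-written
    documents alongside synthetic ones for each training round, so the
    original data distribution is never fully replaced."""
    rng = random.Random(seed)
    n_human = int(size * human_fraction)
    n_synthetic = size - n_human
    mix = (rng.choices(human_docs, k=n_human)
           + rng.choices(synthetic_docs, k=n_synthetic))
    rng.shuffle(mix)
    return mix

# Example: a 10,000-document round that is half human-written, half synthetic.
# training_round = build_training_mix(human_corpus, synthetic_corpus, human_fraction=0.5)
```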
