There is growing evidence that generative AI models are devouring themselves: as they train on the flood of low-quality content that LLMs themselves produce, their output degrades.
A recent study published in the journal Nature suggests the threat is real, and outlets including Forbes and The New York Times are taking it seriously.
In the early days of computers we quickly learned about “garbage in, garbage out,” a reminder that output is only as good as the data that go into it. Today, we are learning that AI-generated responses can be unreliable, and that piling them on top of one another can lead to woeful inaccuracies of information and output, and a potentially implosive scenario. It is called “model collapse.”
Researchers at the universities of Cambridge and Oxford, led by Dr. Ilia Shumailov (now at Google DeepMind), discovered that when generative AI software relies solely on content produced by GenAI, the responses begin to degrade, according to a study published recently in Nature, the science journal.
All this AI-generated information can make it harder for us to know what’s real and accurate. It poses a problem for models that generate AI responses, like ChatGPT and Gemini, as well as for those relying on them.

[Figure: an illustration of model collapse. A model trained on its own generated output may gradually forget the more obscure dog breeds.]
As they “scrape” the web for new data to train their models on (an increasingly challenging task), LLMs are likely to ingest some of their own or others’ unreliable AI-generated content, creating an unintentional feedback loop in which the questionable output of one AI becomes the training data of another.
“In the long run, this cycle may pose a threat to A.I. itself,” reports The New York Times.
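The feedback loop is easy to demonstrate in miniature. The sketch below is a hypothetical Python simulation, not code from the study: it treats a “model” as nothing more than a categorical distribution over dog breeds, and each generation is retrained, by simple maximum-likelihood counting, on a finite sample drawn from the generation before it. The breed names and probabilities are invented for illustration; real LLM training is vastly more complex, but the tail-erasing dynamic is the one the researchers describe.

```python
# A minimal sketch of the recursive-training feedback loop behind
# "model collapse". All names and numbers here are hypothetical.
import random
from collections import Counter

random.seed(0)

# Hypothetical "real" data: common breeds dominate, rare breeds sit in the tail.
breeds = ["golden retriever", "labrador", "beagle", "dalmatian", "otterhound"]
probs = [0.40, 0.30, 0.20, 0.08, 0.02]

SAMPLES_PER_GENERATION = 200

for generation in range(6):
    # "Generate content": draw a finite sample from the current model.
    sample = random.choices(breeds, weights=probs, k=SAMPLES_PER_GENERATION)

    # "Retrain" the next model on that generated content: re-estimate the
    # distribution by counting. Any breed that drew zero samples gets
    # probability zero and can never be generated again.
    counts = Counter(sample)
    probs = [counts[b] / SAMPLES_PER_GENERATION for b in breeds]

    surviving = [b for b, p in zip(breeds, probs) if p > 0]
    print(f"generation {generation}: {len(surviving)} breeds survive: {surviving}")
```

Once a rare breed happens to draw zero samples in a generation, it is gone for good; over successive generations the distribution narrows toward the most common outputs. That, writ small, is the degradation the Nature paper documents in full-scale models.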
Papered Over?
ABC News ran a recent article on model collapse and what it means. Roughly 57% of all web-based text has been AI-generated or translated through an AI algorithm, rather than produced directly by humans, according to a separate study from a team of Amazon Web Services researchers published in June.
“If human-generated data on the internet is quickly being papered over with AI-generated content and the findings of Shumailov’s study are true,” reports Forbes, “it’s possible that AI is killing itself—and the internet.”
Image source: Forbes via Nature.com; nyt.com
