LLM quality degradation and collapse

Some interesting links for more technical pals. Any discussion is probably better at a roundtable or conference, as I can't find answers. Or maybe there are clever people to follow?

Thinking in terms of the Systems Development Life Cycle (SDLC), we're in risk management territory, not software development… you can't hand it over and rub your hands clean. Cradle to grave: support, monitoring and maintenance.

AI model temporal quality degradation - in layman's terms? You can't set up a model and just leave it; it goes a bit loopy over time (there's a toy monitoring sketch after the link below).

Temporal quality degradation in AI models - Scientific Reports
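
The practical upshot is that a deployed model needs ongoing measurement, not a one-off sign-off. A minimal sketch of what that might look like - entirely my illustration, not from the paper; `DriftMonitor`, `baseline_accuracy` and the tolerance value are all made-up names and numbers:

```python
# A minimal sketch, assuming a classification model scored on periodically
# labelled fresh data. All names and thresholds here are illustrative.

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05):
        self.baseline = baseline_accuracy  # accuracy measured at hand-over
        self.tolerance = tolerance         # acceptable drop before alerting
        self.history = []                  # per-batch accuracy over time

    def record_batch(self, predictions, labels) -> bool:
        """Score one fresh batch; return True if the drop exceeds tolerance."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        accuracy = correct / len(labels)
        self.history.append(accuracy)
        return (self.baseline - accuracy) > self.tolerance


monitor = DriftMonitor(baseline_accuracy=0.92)

# Each week (say), score the model on newly labelled examples:
if monitor.record_batch(["a", "b", "a"], ["a", "b", "b"]):
    print("Accuracy has dropped past tolerance - investigate or retrain.")
```

The point isn't the code, it's the life-cycle obligation: someone has to keep labelling fresh data and watching the numbers long after hand-over.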

AI model collapse - in layman's terms? If models start ingesting other models' generated output, that can lead to collapse (a bit like the Habsburgs). There's a toy demo after the abstract below.

The Curse of Recursion: Training on Generated Data Makes Models Forget
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
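
You can see the flavour of the effect in a toy version. This is my sketch, not the paper's code, using the simplest learned generative model (a single Gaussian rather than a full mixture); the sample size and generation count are arbitrary:

```python
# A toy version of the collapse effect: fit a single Gaussian to data,
# sample a new dataset from the fit, fit again, and repeat.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # the "human" data, std = 1

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()     # "train" on the current data
    data = rng.normal(mu, sigma, size=100)  # next generation sees only samples
    if generation % 10 == 0:
        print(f"generation {generation:2d}: fitted std = {sigma:.3f}")

# The fitted std tends to drift downward across generations: each refit on a
# finite sample loses a little of the tails, and because later generations
# never see the original data, the loss compounds instead of averaging out.
```

That shrinking spread is the "tails of the original content distribution disappear" claim in miniature.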

There's a bit more coverage of model collapse out there. A more digestible explanation here:

Model collapse explained: How synthetic training data breaks AI
Discover the phenomenon of model collapse in AI. Learn why AI needs human-generated data and how to prevent model collapse.

Anything more recent is just 'Oh no, the internet is going to run out of content'.

Which is not much help if you're interested in risk management over a life cycle, or wondering whether collapse has been solved or mitigated. Training these models is pricey in many ways. The mitigation most often discussed is anchoring each training round in retained human data - sketched below.
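
Extending the toy demo above with that commonly discussed mitigation - keeping a fixed share of the original human data in every generation's training mix rather than training purely on the previous model's output. `KEEP_HUMAN` and the sizes are arbitrary illustrative choices, and this is a sketch, not a verified fix:

```python
# Same toy setup as before, but each generation's training set is anchored
# with a retained share of the original human data.
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, size=100)  # original data, never thrown away
data = human.copy()
KEEP_HUMAN = 0.5                         # fraction of real data per generation

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()
    synthetic = rng.normal(mu, sigma, size=100)
    n_real = int(KEEP_HUMAN * len(data))
    data = np.concatenate([
        rng.choice(human, size=n_real, replace=False),
        synthetic[: len(data) - n_real],
    ])

print(f"fitted std after 30 anchored generations: {data.std():.3f}")
```

Whether that scales to web-crawled LLM training data - where you can't always tell human text from generated text - is exactly the open question I can't find a good answer to.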
