The Dark Side of AI: Model Collapse and Human Exploitation
Originally published by MarkTechPost

Researchers from Britain and Canada introduce the phenomenon of model collapse, a degenerative learning process in which models gradually forget improbable events over successive generations, even though the underlying data distribution has not changed. They present case studies of this failure in Gaussian Mixture Models (GMMs), Variational Autoencoders (VAEs), and Large Language Models (LLMs). Model collapse can be triggered by training on data produced by another generative model: each generation undersamples the rare events in the tails of the distribution, so the learned distribution drifts away from the true one and narrows with every iteration. Sustaining learning over the long term therefore requires preserving access to the original data source and keeping data not produced by LLMs readily available over time.
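The core mechanism is easy to reproduce in miniature. Below is a minimal sketch (our illustration, not code from the paper) in which each "model" is simply a univariate Gaussian fitted to data, and every generation is retrained only on samples drawn from the previous generation's fit. Finite-sample estimation error compounds across generations: the fitted standard deviation tends to drift downward, and the probability the model assigns to tail events (here, x > 3) collapses.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Generation 0 trains on "real" data: a standard Gaussian stands in for the
# true distribution, and x > 3 stands in for the improbable tail events.
N = 50  # small per-generation sample, so estimation noise compounds quickly
data = rng.normal(0.0, 1.0, size=N)

for gen in range(201):
    # Fit the toy "model": just the sample mean and standard deviation.
    mu, sigma = data.mean(), data.std()
    if gen % 20 == 0:
        # Probability mass the fitted model places beyond x = 3.
        tail = 0.5 * (1.0 - erf((3.0 - mu) / (sigma * sqrt(2.0))))
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}  P(x>3)={tail:.2e}")
    # Every later generation trains only on samples from the previous model.
    data = rng.normal(mu, sigma, size=N)
```

The tiny per-generation sample size exaggerates the effect for demonstration purposes; the paper documents the same degenerative dynamic, at larger scale, in its GMM, VAE, and LLM case studies.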