AI Image Generators Trained on Child Sexual Abuse Images, Stanford Study Reveals
Originally published 2 years ago by The Guardian

A study by the Stanford Internet Observatory has revealed that popular AI image generators, including Stable Diffusion, were trained on thousands of images of child sexual abuse. The researchers identified more than 3,200 suspected images of child sexual abuse in the LAION database, prompting LAION to temporarily remove its datasets. The presence of this illegal material is believed to help AI tools produce harmful outputs, including explicit imagery of fake children. The Stanford Internet Observatory is calling for drastic measures, including deleting training sets built on LAION and removing older versions of Stable Diffusion from the internet.
