AI Image Generators Trained on Child Sexual Abuse Images, Stanford Study Reveals

A study by the Stanford Internet Observatory has revealed that popular AI image generators, including Stable Diffusion, were trained on images of child sexual abuse. The researchers identified more than 3,200 suspected images of child sexual abuse in the LAION database, prompting LAION to temporarily take its datasets offline. The presence of this illegal material is believed to influence the harmful outputs these AI tools can produce and to contribute to the creation of explicit imagery of fake children. The Stanford Internet Observatory is calling for drastic measures, including the deletion of training sets built on LAION and the removal of older versions of Stable Diffusion from the internet.
- AI image generators trained on pictures of child sexual abuse, study finds (The Guardian)
- Large AI Dataset Has Over 1,000 Child Abuse Images, Researchers Find (Bloomberg)
- AI image-generators are being trained on explicit photos of children, a study shows (The Associated Press)
- Stanford study finds child abuse images in AI training data (Axios)
- Child sexual abuse images have been used to train AI image generators (The Washington Post)