AI Image Generators Trained on Child Sexual Abuse Images, Stanford Study Reveals

1 min read
Source: The Guardian
TL;DR Summary

A study by the Stanford Internet Observatory has found that popular AI image generators, including Stable Diffusion, were trained on thousands of images of child sexual abuse. The researchers identified more than 3,200 suspected images of child sexual abuse in the LAION database, prompting LAION to temporarily take down its datasets. The presence of these illegal images in the training data is believed to influence the harmful outputs the AI tools produce and to facilitate the generation of explicit imagery of fake children. The Stanford Internet Observatory is calling for drastic measures, including the deletion of training sets built on LAION and the removal of older versions of Stable Diffusion from the internet.
