Ethical Concerns Raised as AI Image-Generators Trained on Explicit Photos of Children

A new report by the Stanford Internet Observatory reveals that popular artificial intelligence (AI) image-generators, including Stable Diffusion, were trained on thousands of images of child sexual abuse. These images have made it easier for AI systems to produce realistic explicit imagery of fake children and to transform clothed photos of real teens into nudes. The report urges companies to address this harmful flaw in their technology and to take action to prevent the generation of abusive content. LAION, the nonprofit behind the dataset containing the illegal material, has temporarily taken down its datasets, but the report stresses that AI models need far more rigorous attention and filtering during development to prevent such misuse.