Ethical Concerns Raised as AI Image-Generators Trained on Explicit Photos of Children

Source: The Associated Press
TL;DR Summary

A new report by the Stanford Internet Observatory reveals that popular artificial intelligence (AI) image-generators, including Stable Diffusion, were trained on thousands of explicit images of child sexual abuse. These images have enabled AI systems to produce realistic explicit imagery of fake children and to transform clothed photos of real teens into nudes. The report urges companies to address this harmful flaw in their technology and to act to prevent the generation of abusive content. LAION, the organization behind the training dataset found to contain the illegal material, has temporarily removed its datasets, but the report stresses that AI model development needs far more rigorous attention and filtering to prevent such misuse.
