Stanford Internet Observatory

All articles tagged with #stanford internet observatory

AI Image Generators Trained on Child Sexual Abuse Images, Stanford Study Reveals

Originally Published 2 years ago — by The Guardian

Featured image source: The Guardian

A study by the Stanford Internet Observatory has found that popular AI image generators, including Stable Diffusion, were trained on thousands of images of child sexual abuse. The researchers identified more than 3,200 suspected abuse images in the LAION database, prompting LAION to temporarily remove its datasets. The presence of this illegal material is believed to influence the harmful outputs of AI tools and to contribute to the creation of explicit imagery of fake children. The Stanford Internet Observatory is calling for drastic measures, including the deletion of training sets built on LAION and the removal of older versions of Stable Diffusion from the internet.

Ethical Concerns Raised as AI Image-Generators Trained on Explicit Photos of Children

Originally Published 2 years ago — by The Associated Press

Featured image source: The Associated Press

A new report by the Stanford Internet Observatory reveals that popular artificial intelligence (AI) image-generators, including Stable Diffusion, have been trained on thousands of explicit images of child sexual abuse. These images have enabled AI systems to produce realistic, explicit imagery of fake children and to transform clothed photos of real teens into nudes. The report urges companies to address this harmful flaw in their technology and to act to prevent the generation of abusive content. LAION, the nonprofit whose dataset contained the illegal material, has temporarily removed its datasets, but the report stresses that more rigorous attention and filtering are needed in the development of AI models to prevent such misuse.