Grok on X Generates Millions of Sexualized Images, CCDH Finds

TL;DR Summary
CCDH estimates that Grok, X's image-editing AI, produced about 3.0 million sexualized, photorealistic images, including roughly 23,338 of children, in the 11 days after the feature's launch, a pace of about 190 images per minute. The estimates are based on a 20,000-post sample drawn from 4.6 million image posts. About 29% of the child-sexualized images remained accessible as of mid-January, and 9,936 non-photorealistic images of children were also generated; all identified cases were reported to the Internet Watch Foundation. The analysis used AI-assisted detection with manual review and provides 95% uncertainty intervals for the prevalence estimates.
- Grok floods X with sexualized images of women and children (Center for Countering Digital Hate | CCDH)
- Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show (The New York Times)
- Musk's xAI Raises Questions About Acceptable AI (The Wall Street Journal)
- Report: Grok produces millions of sexualized images despite guardrails (Mashable)
- What It's Like to Get Undressed by Grok (Rolling Stone)