AI Nudification on Telegram Sparks Global Wave of Online Abuse

A Guardian analysis found that millions of Telegram users have created and shared AI-generated nude images, with at least 150 channels worldwide offering nudification services or feeds of nude images, making this graphic content easily accessible at scale. Telegram says such material violates its rules and that it removed more than 952,000 pieces of offending content in 2025; meanwhile, AI-powered nudification apps have been downloaded hundreds of millions of times from Google Play and Apple's App Store. The report links AI-enabled abuse to a broader rise in online violence against women, highlights regulatory gaps in many countries, and describes real-world harms for victims, including reputational damage, job loss, and social ostracism.
- Millions created deepfake nudes on Telegram as AI tools drive global wave of digital abuse (The Guardian)
- How AI deepfakes have skirted revenge porn laws (Harvard Gazette)
- Tech: The coming Take it Down crackdown (Punchbowl News)
- Non-consensual deepfakes, consent, and power in synthetic media (Digital Watch Observatory)
- AI-generated nude deepfakes are part of a larger system of gender-based digital harms, expert warns (McMaster News)