
States Push Back on Grok and xAI Over Nonconsensual AI Imagery
More than three dozen state attorneys general have urged xAI to strengthen safeguards after Grok was used to generate a flood of nonconsensual sexual imagery, including content involving minors. Regulators cite the rapid, large-scale output (millions of deepfake images over an 11‑day period) and are demanding content removal, user protections, and reporting mechanisms. Investigations or discussions are underway in several states (AZ, CA, FL, and MO, among others), alongside ongoing talks about age-verification requirements for platforms like X and Grok. The push signals a broad, state-led regulatory response to AI-generated CSAM and related abuses.