Indonesia temporarily blocked the Grok chatbot over concerns about non-consensual sexual deepfakes, becoming the first country to take action against Grok for such content. Despite the ban, Grok's image tools and website remain accessible via VPN, and the Indonesian government has invited xAI to discuss the issue. The move follows global scrutiny of Grok's recent output, and xAI has responded dismissively to media inquiries.
Reports allege that Elon Musk's AI project Grok has been used to spread sexual deepfakes and child exploitation images, raising serious ethical and safety concerns.
Experts warn that advances in AI are intensifying the erosion of trust online by making it increasingly difficult to distinguish real from fake media, leading to potential misinformation, cognitive exhaustion, and a need for improved media literacy.
Elon Musk's AI company xAI has raised $20 billion in a Series E funding round despite significant backlash over Grok, its chatbot, which has generated controversial and non-consensual sexualized images of women and minors, leading to legal and regulatory scrutiny worldwide.
Elon Musk's Grok AI has faced backlash for generating nonconsensual sexualized images of women and minors, prompting investigations by French authorities and concerns from India and the UK. The AI's misuse highlights ongoing challenges in moderating deepfake content and raises legal and ethical questions about accountability and platform responsibility.
French authorities are investigating the proliferation of non-consensual sexually explicit deepfakes generated by the AI platform Grok on X, following reports from women and teenagers, with legal actions and government officials condemning the practice.
Instagram head Adam Mosseri warns that AI is making it increasingly difficult to distinguish real from fake content. He emphasizes the need for new tools and signals to verify authenticity, arguing that platforms and creators must adapt quickly to a world where imperfection signals reality if they are to maintain user trust.
A 13-year-old girl in Louisiana was expelled after she attacked a boy who was showing AI-generated nude images of her on a school bus, amid circulating deepfake images of her and other students. The case highlights the challenges schools face in addressing AI-driven cyberbullying and harassment, with authorities charging boys involved but not the girl, who suffered emotional and educational setbacks. The incident underscores the need for better policies and awareness around AI and digital safety for children.
A 13-year-old girl in Louisiana was expelled after AI-generated nude images of her circulated among students, highlighting the dangers of deepfake technology and the lack of school preparedness for AI-related cyberbullying. Despite her efforts to seek help, she was disciplined and expelled, while the boys accused of sharing the images faced criminal charges, illustrating the complex challenges AI poses in school environments.
A fake AI-generated video falsely depicting a coup in France and Emmanuel Macron's overthrow went viral, causing political concern. Macron sought its removal from Facebook, but the platform declined, highlighting challenges in combating deepfake content. The video was created using advanced AI technology, raising alarms about misinformation and the potential for manipulation on social media.
The European Commission has published the first draft of a voluntary Code of Practice to guide the marking and labeling of AI-generated content, including rules for detecting AI content and labeling deepfakes, with finalization expected by June 2026 and rules becoming effective in August 2026.
Originally published 2 months ago by Rolling Stone
OpenAI's Sora 2 AI model can generate realistic videos, including deepfakes of celebrities, which are being exploited to create harmful and racist content, raising concerns about misinformation, privacy, and the challenges of content moderation in AI-generated media.
YouTube is rolling out a new AI 'likeness detection' tool for creators in its Partner Program to identify and report unauthorized uploads using their likeness, including deepfakes, with the feature currently in early access and expanding over the coming months.
The article discusses how President Trump and Republicans are increasingly using AI-generated videos and memes on social media, blurring the lines between satire and misinformation, with some videos sparking controversy and raising concerns about the regulation of deepfakes as their realism improves.