YouTube has terminated two popular channels, Screen Culture and KH Studio, for creating AI-generated fake movie trailers that violated policies on spam and misleading content, amid ongoing concerns over AI's impact on creative industries and copyright issues.
Indie game Horses by Santa Ragione was banned from both Steam and Epic Games Store shortly before its release, with Epic citing violations of content policies related to inappropriate and hateful content, despite the developer's protests and explanations. The game remains available on GOG and Itch.io as the studio seeks to recover its investment.
AI-generated content and social media platforms are contributing to 'brain rot,' a decline in critical thinking and meaningful engagement, with examples from Facebook, Reddit, and other online communities highlighting concerns about misinformation, low-quality content, and the impact on human cognition.
Social media platforms like TikTok and X are being flooded with AI-generated videos depicting violence against women, highlighting ongoing challenges in moderating harmful content created with generative AI tools.
YouTube is rolling out a new AI 'likeness detection' tool for creators in its Partner Program to identify and report unauthorized uploads using their likeness, including deepfakes, with the feature currently in early access and expanding over the coming months.
Google is rolling out a likeness detection system on YouTube to help creators combat AI-generated deepfake videos that impersonate them, aiming to prevent misinformation and protect personal brands, though it requires users to verify their identity with personal information.
Instagram is updating its safety settings for teen accounts to align with PG-13 movie guidelines, restricting harmful content, limiting interactions with inappropriate accounts, and enhancing parental controls, in response to concerns about teens' exposure to unsafe content and as part of Meta's broader efforts to improve platform safety.
OpenAI's Sora 2 social media platform, launched recently for sharing AI-generated videos, has faced backlash over strict guardrails that restrict generating copyrighted characters and public figures, leading to user frustration and protests, amid ongoing legal challenges against AI image generators for copyright infringement.
OpenAI's Sora 2, a new text-to-video app, has sparked controversy due to its lack of initial safeguards, leading to the creation of inappropriate and illegal content, prompting the company to tighten restrictions and consider revenue-sharing models with rights holders.
OpenAI's new AI video app Sora 2, launched with a social feed, quickly became controversial as it generated violent, racist, and copyrighted content, highlighting concerns about the effectiveness of safeguards and the potential for misinformation and misuse in AI-generated media.
The widespread availability of graphic videos of Charlie Kirk's shooting highlights the challenges social media platforms face in moderating violent content quickly and effectively, amid ambiguous policies, algorithm-driven engagement, and varying regional regulations.
Roblox removed over 100 experiences related to the shooting of Charlie Kirk following viral videos and concerns over violent content, with social media platforms also taking steps to curb graphic content related to his death.
French authorities are suing the streaming platform Kick following the death of streamer Jean Pormanove, who endured abuse during live broadcasts. The case raises concerns about platform responsibility, with potential penalties including prison sentences and hefty fines, as France investigates Kick's compliance with content laws and the EU's Digital Services Act.
Preprint servers like PsyArXiv are combating suspicious submissions, including AI-generated content and paper mill outputs, by removing non-compliant manuscripts and increasing moderation efforts, amid rising concerns over the quality and authenticity of scientific publications influenced by AI tools.