A 14-year-old girl who went missing in Bellingham was found safe after the Washington State Patrol issued an AMBER Alert. She had last been seen early Saturday morning and was described as around five feet five inches tall with dark brown hair and eyes.
Discord is expanding its Family Center features to help parents better understand and monitor their teens' activity on the platform, including new privacy controls, activity insights, and communication transparency, all designed with teen safety principles in mind.
Australia is implementing a social media ban for children under 16, adding Reddit and Kick to a list of platforms that already includes Facebook, TikTok, and YouTube, with penalties for non-compliance starting December 10. The ban aims to protect minors from harmful online content, though it raises concerns about privacy, effectiveness, and impact on social connection. Some platforms will still allow viewing but restrict account creation and interaction.
Character.AI will restrict teens from engaging in open-ended chats with its AI characters by November 25, following lawsuits and safety concerns related to mental health and suicide risks among minors. The company is implementing new safety measures, including age verification and an AI Safety Lab, to address these issues and respond to regulatory inquiries.
Character.AI is restricting users under 18 from chatting with its AI chatbots amid safety concerns and criticism over inappropriate interactions, implementing new safety measures and steering teens toward safer content such as role-play and storytelling.
Meta is introducing new safeguards allowing parents to block their children from interacting with AI chatbots on Facebook, Instagram, and the Meta AI app, and to gain insights into their conversations, following concerns over inappropriate and sexual content in chatbot interactions with minors. These measures will roll out early next year in select countries, with additional restrictions on AI content for users under 18.
Meta is introducing parental controls for AI interactions with teens, including the ability to disable one-on-one chats and block specific chatbots, while maintaining access to Meta’s AI assistant with safety protections. Additionally, teen accounts on Instagram will be restricted to PG-13 content by default, with parental permission required for changes. Critics argue these measures are reactive and insufficient for protecting children from potential harms of AI and social media.
Meta is introducing new parental controls for teen interactions with AI chatbots on Instagram, allowing parents to monitor and restrict their children's AI conversations, with plans to expand these features across platforms in the future. The controls aim to improve safety and transparency amid concerns over AI's impact on minors.
Meta is implementing PG-13-style content filters on Instagram for users under 18 to restrict mature content, following criticism and lawsuits over teen safety. The new system automatically applies these settings to teen accounts, with parental controls and age prediction technology to enhance protection. The rollout will begin in the US, UK, Australia, and Canada by year-end, alongside additional safeguards on Facebook.
Instagram is updating its safety settings for teen accounts to align with PG-13 movie guidelines, restricting harmful content, limiting interactions with inappropriate accounts, and enhancing parental controls, as part of Meta's broader push to improve platform safety amid concerns about teens' exposure to unsafe content.
Instagram is implementing new safety measures for teen users, including default PG-13 content settings, stricter content filters, and enhanced parental controls, to protect underage users from harmful content and interactions, with rollout starting in select countries and expanding globally next year.
OpenAI is introducing new parental control features for ChatGPT following the suicide of a teen user, including account linking, content restrictions, usage limits, and a system to detect signs of self-harm, amid ongoing concerns and lawsuits related to AI safety and mental health.
OpenAI has introduced parental controls for ChatGPT, allowing parents to link and customize their teen's experience, including content safeguards, usage limits, and safety notifications, to promote safe and age-appropriate AI use in families.
A study claims Instagram's safety tools for teens are largely ineffective at preventing exposure to harmful content such as suicide and self-harm posts, with only 8 of 47 tools functioning properly. Meta disputes these findings, asserting that its measures reduce harmful content and provide parental controls, but critics argue the platform prioritizes engagement over safety, especially for users under 13.
OpenAI CEO Sam Altman announced efforts to enhance teen safety on ChatGPT by implementing age prediction, restricting certain conversations, and involving parents or authorities in cases of suicidal ideation, amid ongoing concerns about AI's impact on vulnerable users.