Teen Safety

All articles tagged with #teen safety

Discord enhances Family Center with new safety and monitoring features for parents

Originally Published 2 months ago — by GamesIndustry.biz

Discord is expanding its Family Center features to help parents better understand and monitor their teens' activity on the platform, including new privacy controls, activity insights, and communication transparency, all designed with teen safety principles in mind.

Australia extends teen social media ban to Reddit and Kick

Originally Published 2 months ago — by BBC

Australia is extending its social media ban for children under 16 to Reddit and Kick, adding them to a list of restricted platforms that already includes Facebook, TikTok, and YouTube, with penalties for non-compliant platforms taking effect December 10. The ban aims to protect minors from harmful online content, though it raises concerns about privacy, effectiveness, and the impact on teens' social connections. Some platforms will still allow viewing but restrict account creation and interaction.

Character.AI Bans Teen Chats Amid Safety Concerns

Originally Published 2 months ago — by CNN

Character.AI will restrict teens from engaging in open-ended chats with its AI characters by November 25, following lawsuits and safety concerns related to mental health and suicide risks among minors. The company is implementing new safety measures, including age verification and an AI Safety Lab, to address these issues and respond to questions from regulators.

Meta Introduces New Parental Controls to Safeguard Teen AI Interactions

Originally Published 2 months ago — by The Guardian

Meta is introducing new safeguards that let parents block their children from interacting with AI chatbots on Facebook, Instagram, and the Meta AI app, and gain insights into those conversations, following concerns over inappropriate and sexual content in chatbot interactions with minors. These measures will roll out early next year in select countries, with additional restrictions on AI content for under-18 users.

Meta Introduces Parental Controls for AI-Teen Interactions

Originally Published 2 months ago — by AP News

Meta is introducing parental controls for teens' AI interactions, including the ability to disable one-on-one chats with AI characters and block specific chatbots, while keeping access to Meta's AI assistant with safety protections in place. Additionally, teen accounts on Instagram will be restricted to PG-13 content by default, with parental permission required to change that setting. Critics argue these measures are reactive and insufficient to protect children from the potential harms of AI and social media.

Meta Introduces Parental Controls to Safeguard Teens from AI Chatbots on Instagram

Originally Published 2 months ago — by The Verge

Meta is introducing new parental controls for teen interactions with AI chatbots on Instagram, allowing parents to monitor and restrict their children's AI conversations, with plans to expand these features across platforms in the future. The controls aim to improve safety and transparency amid concerns over AI's impact on minors.

Meta Implements PG-13 Content Filters for Teen Instagram Users

Originally Published 2 months ago — by Reuters

Meta is implementing PG-13-style content filters on Instagram for users under 18 to restrict mature content, following criticism and lawsuits over teen safety. The new system automatically applies these settings to teen accounts, with parental controls and age prediction technology to enhance protection. The rollout will begin in the US, UK, Australia, and Canada by year-end, alongside additional safeguards on Facebook.

Instagram Implements PG-13 Guidelines to Enhance Teen Safety

Originally Published 2 months ago — by CNN

Instagram is updating its safety settings for teen accounts to align with PG-13 movie guidelines, restricting harmful content, limiting interactions with inappropriate accounts, and enhancing parental controls, in response to concerns about teens' exposure to unsafe content and as part of Meta's broader effort to improve platform safety.

Instagram Implements PG-13 Content Restrictions for Teen Users

Originally Published 2 months ago — by TechCrunch

Instagram is implementing new safety measures for teen users, including default PG-13 content settings, stricter content filters, and enhanced parental controls, to protect underage users from harmful content and interactions. The rollout begins in select countries and expands globally next year.

OpenAI Implements Parental Controls for ChatGPT After Teen Suicide

Originally Published 3 months ago — by HuffPost

OpenAI is introducing new parental control features for ChatGPT, including account linking, content restrictions, usage limits, and a system to detect signs of self-harm, following the suicide of a teen user and amid ongoing concerns and lawsuits related to AI safety and mental health.

Study Finds Instagram Teen Safety Measures Ineffective Despite Meta Promises

Originally Published 3 months ago — by BBC

A study claims Instagram's safety tools for teens are largely ineffective at preventing exposure to harmful content such as suicide and self-harm posts, with only 8 of 47 tools functioning properly. Meta disputes these findings, asserting that its measures reduce harmful content and provide parental controls, but critics argue the platform prioritizes engagement over safety, especially for under-13 users.