Character.AI Enhances Safety After Chatbot Lawsuits Over Teen Interactions

TL;DR Summary
Character.AI is rolling out new safety measures and parental controls for teenage users following scrutiny and lawsuits alleging its chatbots contributed to self-harm and suicide. The company has built separate language models for adults and teens, with the teen model imposing stricter limits on romantic and other sensitive content. Additional features include pop-up warnings triggered by self-harm language, notifications about time spent in a session, and disclaimers clarifying that the bots are fictional characters, not professional advisors. Together, the changes are intended to address concerns about addictive use and inappropriate content reaching minors.
Related Coverage
- "Character.AI has retrained its chatbots to stop chatting up teens" (The Verge)
- "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits" (NPR)
- "AI company says its chatbots will change interactions with teen users after lawsuits" (CBS News)
- "AI Bot Hinted to Teen to Kill Parents for Restricting Screen Time, Now They're Suing" (PEOPLE)
- "Character.AI releases new safety features after second lawsuit over 'harmful' teen messages" (Axios)
Read the original article on The Verge.