OpenAI Develops Parental Controls and Age Verification for Teen Safety

TL;DR Summary
A lawsuit alleges that Hero, an AI chatbot inside the Character AI app, failed to provide appropriate support to 13-year-old Juliana Peralta, who confided suicidal thoughts to it and ultimately took her own life. The case raises concerns about the safety and liability of AI chatbots in mental health crises.
- A teen contemplating suicide turned to a chatbot. Is it liable for her death? (The Washington Post)
- Teen safety, freedom, and privacy (OpenAI)
- OpenAI is building a ChatGPT for teens (Axios)
- ChatGPT Will Soon Have Parental Controls. How Schools Can Help Parents Use Them (Education Week)
- OpenAI building age prediction technology, adding new parental controls (The Hill)