OpenAI Warns of Mental Health Risks Linked to ChatGPT Usage

TL;DR Summary
OpenAI has disclosed that approximately 0.07% of weekly active ChatGPT users show possible signs of mental health emergencies such as psychosis or mania, while 0.15% have conversations with explicit indicators of potential suicidal planning or intent. The company says it has built safety responses into the chatbot and consulted mental health experts, amid legal scrutiny and growing concern over AI's impact on vulnerable users.
- OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis (BBC)
- A Teen in Love With a Chatbot Killed Himself. Can the Chatbot Be Held Responsible? (The New York Times)
- More than a million people every week show suicidal intent when chatting with ChatGPT, OpenAI estimates (The Guardian)
- OpenAI maps out the chatbot mental health crisis (Platformer)
- OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week (WIRED)