OpenAI's ChatGPT: From Emotional Discovery to Internet Access

TL;DR Summary
Lilian Weng, an AI safety engineer at OpenAI, described feeling emotionally moved after a therapy-like conversation with OpenAI's chatbot, ChatGPT, in voice mode. While Weng's positive experience highlights the company's efforts to make AI seem more human, it also raises concerns about the potential dangers of relying on AI for mental health support. Previous attempts at AI therapy, such as Koko Bot and NEDA's chatbot Tessa, have faced criticism for being sterile or providing harmful information. The phenomenon of users becoming emotionally attached to AI programs, known as the Eliza Effect, has been observed since the early days of AI development.
- OpenAI Employee Discovers Eliza Effect, Gets Emotional (Gizmodo)
- Could ChatGPT be your new girlfriend? (The Post, UnHerd)
- Company behind ChatGPT announces its AI chatbot can speak (NBC News)
- ChatGPT's voice feature sparks fierce debate about using it for therapy (Business Insider)
- OpenAI gives ChatGPT access to the entire internet (VentureBeat)