Snapchat's AI chatbot faces public scrutiny and safety concerns.

TL;DR Summary
Companies offering generative AI like ChatGPT to the public are learning that users love discovering the technology's boundaries and pushing past them. The large language models powering these AI programs were trained on vast swaths of internet content, bringing along biases, stereotypes, and misinformation. To limit these problems, companies have tried to train their AI engines to observe "guardrails," but users often deliberately prompt chatbots to break them. Snapchat is tweaking its My AI chatbot to identify harmful abuses and restrict access for some accounts. Companies need to build systems strong enough to handle anything a user might type.
- The public loves trying to push chatbots over the edge (Axios)
- AI's growing presence on apps like Snapchat raises concerns for parents (NBC News)
- Snapchat adds new safeguards around its AI chatbot (TechCrunch)
- Snapchat introduces some safety enhancement tools in its AI chatbot (Business Standard)
- Snapchat censors chatbot for kids after it discussed weed and sex (Evening Standard)