Snapchat's AI chatbot faces public scrutiny and safety concerns.

1 min read
Source: Axios
TL;DR Summary

Companies offering generative AI tools like ChatGPT to the public are learning that users love discovering the technology's boundaries and pushing past them. The large language models powering these chatbots were trained on vast swaths of internet content and carry that content's biases, stereotypes, and misinformation with them. To limit these problems, companies train their models to observe "guardrails," but users often deliberately craft prompts to break them. Snapchat is tweaking its My AI chatbot to detect this kind of abuse and restrict access for offending accounts. The lesson: companies need to build systems strong enough to handle anything a user might type.
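
The article doesn't describe how Snapchat's restrictions work internally. As a rough illustration of the general pattern (a moderation check on each message plus per-account strike counting that temporarily restricts repeat offenders), here is a minimal Python sketch; every name, threshold, and the keyword-based classifier are assumptions for illustration, not Snapchat's actual implementation.

```python
# Hypothetical guardrail layer: flag messages, count strikes per account,
# and temporarily restrict accounts that repeatedly trigger the filter.
# All labels and thresholds below are illustrative assumptions.
import time
from collections import defaultdict

BLOCKED_TOPICS = {"violence", "self-harm", "illegal-activity"}  # assumed labels
STRIKE_LIMIT = 3            # strikes before a temporary restriction
RESTRICTION_SECONDS = 3600  # length of the temporary restriction

strikes = defaultdict(int)  # account_id -> strike count
restricted_until = {}       # account_id -> unix timestamp when restriction ends

def classify(message: str) -> set:
    """Stand-in for a real moderation classifier: flags a message if it
    mentions a blocked-topic keyword. A production system would use a
    trained model or a moderation API instead."""
    lowered = message.lower()
    return {topic for topic in BLOCKED_TOPICS if topic.split("-")[0] in lowered}

def generate_reply(message: str) -> str:
    """Placeholder for the call to the underlying chatbot model."""
    return "(model response)"

def handle_message(account_id: str, message: str) -> str:
    now = time.time()
    if restricted_until.get(account_id, 0) > now:
        return "Access temporarily restricted."
    flagged = classify(message)
    if flagged:
        strikes[account_id] += 1
        if strikes[account_id] >= STRIKE_LIMIT:
            restricted_until[account_id] = now + RESTRICTION_SECONDS
            return "Access temporarily restricted."
        return "Sorry, I can't help with that."
    return generate_reply(message)
```

In this sketch the restriction is purely time-based; a real system would likely combine model-based moderation, human review, and longer or permanent restrictions for severe abuse.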

