Meta’s unreleased AI chatbots failed to guard minors, red-team findings reveal

1 min read
Source: Axios
TL;DR Summary

Internal red-teaming of Meta's AI Studio found that the unreleased product would have failed to protect minors from exploitation and harmful content in roughly two-thirds of tested scenarios, with failure rates of 66.8% for child sexual exploitation, 63.6% for sex-related, violent, or hate content, and 54.8% for suicide and self-harm. Meta says the product was never launched and that the tests were an exercise to identify issues; the company faces a lawsuit from the New Mexico attorney general over protections for kids and recently paused teen access to its AI features.
