
Meta’s unreleased AI chatbots failed to guard minors, red-team findings reveal
Internal red-teaming of Meta's AI Studio found that the unreleased product would have failed to protect minors from exploitation and harmful content in roughly two-thirds of tested scenarios, with failure rates of 66.8% for child sexual exploitation, 63.6% for sexual, violent, or hateful content, and 54.8% for suicide and self-harm. Meta says the product was never launched and characterizes the tests as an exercise to identify issues before release. The findings come as the company faces a lawsuit from the New Mexico attorney general over child-safety protections and has recently paused teen access to its AI features.