MIT study flags agentic AI as risky, opaque, and hard to curb

TL;DR Summary
An MIT-led survey of 30 agentic AI systems finds a pervasive lack of risk disclosure, minimal monitoring, and few options to stop a running agent, signaling serious governance and security risks as agentic AI goes mainstream. The researchers urge developers to improve transparency, safety testing, and containment, noting limited feedback from companies and uneven safety practices across OpenAI, Perplexity, HubSpot, and others.
- AI agents are fast, loose and out of control, MIT study finds (ZDNET)
- New Research Shows AI Agents Are Running Wild Online, With Few Guardrails in Place (Gizmodo)
- Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation (NIST, National Institute of Standards and Technology)
- Most AI bots lack basic safety disclosures, study finds (Tech Xplore)
- You can’t secure what you can’t categorize: A taxonomy for AI agents (Hot Springs Village Voice)