OpenAI Halts AI Misuse by Russia, China, Iran, and Israel in Disinformation Campaigns

TL;DR Summary
OpenAI has disrupted five covert influence operations that attempted to misuse its AI models for deceptive activity, including generating fake comments and articles on political issues. The operations, run by actors from Russia, China, Iran, and Israel, aimed to manipulate public opinion. OpenAI said the campaigns failed to gain significant engagement and announced a new Safety and Security Committee to oversee the training of future AI models.
- OpenAI has stopped five attempts to misuse its AI for 'deceptive activity' — Reuters
- OpenAI says Russian and Israeli groups used its tools to spread disinformation — The Guardian
- In a first, OpenAI removes influence operations tied to Russia, China and Israel — NPR
- OpenAI finds Russian and Chinese groups used its tech for propaganda campaigns — The Washington Post
- OpenAI Says Russia and China Used Its A.I. in Covert Campaigns — The New York Times