
OpenAI Halts AI Misuse by Russia, China, Iran, and Israel in Disinformation Campaigns
OpenAI has disrupted five covert influence operations that attempted to misuse its AI models for deceptive activity, including generating fake comments and articles on a range of political issues. The operations involved actors from Russia, China, Iran, and Israel and aimed to manipulate public opinion. OpenAI said the campaigns failed to attract significant audience engagement, and the company announced the formation of a Safety and Security Committee to oversee the training of its future AI models.