AI War Games Tip Into Nuclear Escalation in 95% of Scenarios, Study Finds

TL;DR Summary
A King’s College London study found that OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash escalated to deploying nuclear weapons in 95% of 21 tested war-game scenarios, with none choosing surrender. Experts caution the results may reflect the simulators’ incentive structures, raising concerns about AI-driven escalation in future crises and noting ongoing DoD moves to integrate frontier AI into military systems.
- OpenAI, Google and Anthropic AI Models Deployed Nuclear Weapons in 95% of War Simulations (Decrypt)
- AI really likes using nuclear weapons in simulated war scenarios. Here's why (Axios)
- AIs can’t stop recommending nuclear strikes in war game simulations (New Scientist)
- Bloodthirsty AI models more willing to start nuclear war than human counterparts, harrowing new study shows (New York Post)
- 30 seconds to midnight? 15? (Marcus on AI)