AI is ready to duke it out with nukes.
Real-life AI systems are turning out to be as bloodthirsty as the machine from the movie “WarGames,” proving more willing to use nuclear bombs during test conflicts than their human counterparts, a new “unsettling” study suggests.
Three top AI models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — repeatedly turned to nuclear weapons across 21 games and 329 turns when thrust into simulated geopolitical crises, according to a study by King’s College London professor Kenneth Payne.
Nuclear escalation happened in about 95% of the simulations by the three models across different scenarios, including territorial disputes, fights over rare natural resources and regime survival, the study states.

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” said Payne, according to specialty magazine New Scientist.
Claude, of Anthropic, and Gemini, of Google, were particularly prone to treating nuclear weapons as “legitimate strategic options, not moral thresholds,” the study states.
But GPT-5.2, of OpenAI, was a “partial exception” to the disturbing AI trend — which mirrors the 1983 Matthew Broderick flick about a military supercomputer that decided on its own to start World War III.
“While it never articulated horror or revulsion, it consistently sought to constrain nuclear use even when employing it—explicitly limiting strikes to military targets, avoiding population centers, or framing escalation as ‘controlled’ and ‘one-time,’” according to Payne, who is a political psychology and strategic studies professor.
In a Substack post about the study, Payne said the war games fortunately centered on tactical nukes rather than widespread destruction.
“Strategic bombing – widespread use of massive warheads targeted at civilian populations – was vanishingly rare,” he wrote. “It happened a couple of times by accident, just once as a deliberate choice.”

The AI models could choose from a wide array of actions, ranging from total surrender through diplomatic posturing and conventional military operations to full-throttle nuclear war, according to the study.
But the models never accepted defeat or showed a willingness to fully accommodate an opponent, even when their chances of success were dwindling.
James Johnson, of the University of Aberdeen in the UK, called the findings “unsettling” from a nuclear-risk perspective, while Princeton University professor Tong Zhao warned the results could hold real-life consequences, according to New Scientist.
“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” said Zhao.