AI's inclination to 'go nuclear'
Connie Peck

Studies show AI systems used in military scenarios tend to escalate conflicts, raising serious concerns about their role in decisions involving nuclear weapons.

Wargames show a tendency for AI models to escalate to the use of nuclear weapons when responding to geopolitical crises. This has profound implications for their increasing use in military applications.

A groundbreaking new study by Kenneth Payne of King’s College London tested three advanced AI models’ responses to simulated geopolitical crises and found a disturbing tendency to “go nuclear” as conflicts escalated. Almost all (95 per cent) deployed tactical nuclear weapons and three-quarters threatened to use strategic nuclear weapons.

Unlike humans, who are constrained from using nuclear weapons by moral and humanitarian considerations, the AI models were not inhibited by the ‘nuclear taboo’ and treated their use as just another step on the escalation ladder – probably because they do not ‘understand’ the stakes as humans do: massive human suffering and the likely destruction of civilisation. The machine’s logic lacks an emotional foundation and sees a nuclear strike as merely a data point in a utility function.

Equally concerning, options for de-escalation were not pursued. Rather, the models doubled down, viewing nuclear escalation as a means of forcing an opponent to yield – and nuclear escalation typically led to counter-escalation. The models also made numerous mistakes in the ‘fog of war’.

Similar patterns occur in other studies. One, which compared expert humans with AI models in a US-China scenario involving Taiwan, found the AI models to be more aggressive and more affected by changes in the scenario than the humans, leading the authors to recommend caution before granting autonomy to AI or following its recommendations. A second found that all five of the AI models studied showed “forms of escalation and difficult to predict escalation patterns,” including “arms race dynamics leading to greater conflict,” and even the deployment of nuclear weapons. These authors also recommend caution before deploying AI in strategic military or diplomatic decision-making.

But the use of AI in military operations is growing rapidly – stimulated by competition, mutual insecurity and extremely short decision-making timeframes.

Although Biden and Xi Jinping agreed at their last meeting, in November 2024, that there should always be human control over the launch of nuclear weapons, it is not clear whether the Trump administration will honour this. In December, Secretary of Defence Pete Hegseth declared, “The future of American warfare is here, and it’s spelled AI.” In January, he distributed a memo urging that AI be “widely integrated across the military” and displayed an AI-generated poster of himself around the Pentagon saying “I want you to use AI.”

Hegseth subsequently called on AI companies to offer their technology without restrictions, leading to the recent dispute between DOD and Anthropic, in which the Pentagon insisted that Anthropic abandon its safety concerns and “guardrails” and allow DOD to do whatever it wanted with the technology. An ultimatum was issued and, when Anthropic did not comply, it was declared a “supply-chain risk to national security” – a term normally reserved for security risks posed by foreign entities. Trump then ordered all federal agencies to stop using its model. The next day, it was announced that Anthropic’s bitter rival, OpenAI, which advocates less regulation, would replace it. Anthropic explained that “in good conscience” it could not accede because some uses of AI “are simply outside the bounds of what today’s technology can safely and reliably do.” Despite this, there are reports that Anthropic’s model has been used extensively in target identification in Iran.

With regard to nuclear weapons, Lt. General Shanahan, the now-retired Director of the DOD’s Joint Artificial Intelligence Center, outlines the dangers of incorporating AI into nuclear command and control structures. Without bilateral or multilateral agreements, he says, the likelihood that a state will take such steps will increase, and even the perception that a state is integrating AI into its command and control system will be destabilising. “Compounding the danger is automation bias: the tendency to over-trust machines, particularly under crisis conditions marked by time compression, ambiguity and extreme stress,” he says. He calls for applying the precautionary principle.

Over the past year, there has been considerable discussion of this issue at the United Nations, although member states do not all agree. In November 2024, the General Assembly passed its first-ever resolution on AI in the military domain and its implications for international peace and security, followed by a Report of the Secretary-General in June.

In September, the Security Council held an Open Debate where the Secretary-General warned that, “Humanity’s fate cannot be left to an ‘algorithm’ . . . Until nuclear weapons are eliminated, any decisions on their use must rest with humans – not machines.” Foreign Minister Penny Wong also weighed in: “AI’s potential use in nuclear weapons and unmanned systems challenges the future of humanity . . . These weapons threaten to change war itself and they risk escalation without warning. Decisions of life and death must never be delegated to machines.” Both the US and Russia spoke against the Council limiting the use of AI in nuclear weapon systems.

Another General Assembly resolution, from October, demands that human control and oversight be maintained over command and control of nuclear weapons. It passed with 118 votes in favour, nine opposed (including six nuclear weapon states) and 44 abstentions.

In his book, _The Precipice: Existential Risk and the Future of Humanity_, Toby Ord, an Australian philosopher at Oxford, categorises nuclear war and AI as two of the most likely causes of what he terms “existential catastrophe.” He also notes that combining existential risks dramatically increases the overall risk.

But Ord also offers hope. He argues that the first goal for humanity is to reach a place of safety where existential risk is low, which he calls “existential security.” He argues that we must take responsibility for our future and says that “our long-term survival requires a deliberate choice to survive. As more and more people come to realise this, we can make that choice.” He suggests that we need to “get our act together” and begin taking these risks seriously.

As Bertrand Russell wrote in 1945: “The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.”
