From the A bomb to the AI bomb, nuclear weapons’ problematic evolution

By Sébastian SEIBT

From autonomous nuclear submarines to threat-detection algorithms to robot-guided high-speed missiles, artificial intelligence could revolutionise nuclear weapons – raising some profound ethical conundrums along the way – a recent report reveals.

At 2:26 A.M. on June 3, 1980, Zbigniew Brzezinski, US President Jimmy Carter’s famously hawkish national security adviser, received a terrifying phone call: 220 Soviet nuclear missiles were heading for the US. A few minutes later, another phone call brought even worse news: in reality, 2,200 missiles were flying towards the US.

Eventually, just as Brzezinski was about to warn Carter of the impending doom, military officials realised that it was a gargantuan false alarm caused by a malfunctioning automated warning system. The Cold War had nearly ended in apocalypse because of a single computer component that was not working properly.

This was long before artificial intelligence (AI) rose to prominence. But the Americans and Soviets had already begun introducing algorithms into their control rooms to make their nuclear deterrence more effective. Several incidents – most notably that of June 3, 1980 – exposed the risks of such automation.

‘Novelty implies new vulnerabilities’

Almost forty years on from that near debacle, AI seems to have disappeared from the nuclear debate, even though algorithms have become ubiquitous at every level of society. But a report published on May 6 by the Stockholm International Peace Research Institute (SIPRI) puts the issue squarely back on the agenda.

The nuclear arms race still poses a considerable threat, seeing as Donald Trump’s America has promised to modernise its arsenal, North Korea seems uninterested in abandoning its nuclear programme, and relations are tense between neighbouring nuclear powers and historical antagonists India and Pakistan.

However, technological breakthroughs in AI show “enormous potential in the nuclear realm, as in the areas of conventional and cyber weapons”, said Vincent Boulanin, the SIPRI researcher responsible for the report, in an interview with FRANCE 24. In particular, machine learning is “excellent for data analysis”, Boulanin continued, and such work could play an essential role in intelligence gathering and the detection of cyber attacks.

Russia resurrects Soviet AI system

“In truth, we know very little about the use of AI in nuclear weapons systems at present,” Boulanin admitted. Russia is the only world power to have broached the issue recently: in March 2018, President Vladimir Putin announced the development of Poseidon, a fully autonomous nuclear underwater drone. Furthermore, in 2011 Moscow resurrected and updated the Perimetr system, which uses artificial intelligence to detect, under certain conditions, a nuclear strike by another state. But experts consider these announcements to be lacking in concrete details.

In part, such scepticism stems from the fact that “the adoption of new technologies in the nuclear field tends to be rather slow, because novelty implies the possibility of new vulnerabilities”, Boulanin pointed out. Those in charge of nuclear weapons programmes prefer to run them on outdated computers rather than on state-of-the-art systems that are at risk of being hacked.

Nevertheless, Boulanin continued, given the enticing prospects of such technology, it is only a matter of time before the nuclear powers adopt AI in their weapons systems. Its chief advantage is that algorithms are far faster than humans at processing information.

AI could also make missile guidance systems more accurate and more flexible, according to Boulanin. “This would be especially useful for high-velocity systems that humans can’t manoeuvre,” he said. Indeed, several countries are working on prototypes of hypersonic aircraft and missiles able to fly at five times the speed of sound. No human could intervene to alter the trajectory of such a missile in flight, whereas AI could correct its aim if necessary.

The dark side of AI in nuclear weapons

There is, however, a very dark side to AI. By nature, it implies delegating decision-making from humans to machines – which carries serious “moral and ethical” implications, noted Page Stoutland, vice-president of the American NGO the Nuclear Threat Initiative, which collaborated on the SIPRI report.

On this basis, “the guiding principle of respect for human dignity dictates that machines should generally not be making life-or-death decisions”, argued Frank Sauer, a nuclear weapons specialist at Bundeswehr University Munich, in the SIPRI study. “Countries need to take a clear stance on this” so that robotic hands never end up on the red button.

What’s more, algorithms are created by humans and, as such, can reproduce the prejudices of their creators. In the US, several studies have shown that AI used by the justice system to predict reoffending is “racist”. “It is therefore impossible to exclude a risk of inadvertent escalation or at least of instability if the algorithm misinterprets and misrepresents the reality of the situation,” pointed out Jean-Marc Rickli, a researcher at the Geneva Centre for Security Policy, in the SIPRI report.

Risk of accidental use

Artificial intelligence also risks upsetting the delicate balance between the nuclear powers, warned Michael Horowitz, a defence specialist at the University of Pennsylvania, in the SIPRI study: “An insecure nuclear-armed state would therefore be more likely to automate nuclear early-warning systems, use unmanned nuclear delivery platforms or, due to fear of rapidly losing a conventional war, adopt nuclear launch postures that are more likely to lead to accidental nuclear use or deliberate escalation.” By that logic, the US – which boasts one of the world’s largest nuclear stockpiles – would be more cautious in adopting AI than a smaller nuclear power such as Pakistan.

In short, artificial intelligence is a double-edged sword when applied to nuclear weapons. In certain respects, it could help make the world safer. But it needs to be adopted “in a responsible way, and people need to take time to identify the risks associated with AI, as well as pre-emptively solving its problems”, Boulanin concluded.

One sobering comparison is with the financial services industry. Bankers introduced AI to their sector with the same arguments – promises of speed and reliability – that its advocates now use in the nuclear weapons field. Yet the use of AI in trading rooms has led to some very unpleasant stock market crashes.

And of course, nuclear weapons will give AI much more to play with than mere money.