Abstract
Artificial intelligence (AI) has the potential to radically change societies. Employed in fields ranging from healthcare to the economy, it can improve human lives. However, this revolutionary technology may also cause disruptive imbalances in the military power relations between countries, especially in the field of nuclear stability. Although the development of AI-based defensive weapon systems might improve nuclear deterrence, incorporating artificial intelligence into nuclear offensive capabilities and command and control (C2) systems could accelerate escalation in crisis scenarios.
By Nicolò Miotto
Introduction
Since the detonation of the U.S. nuclear bombs over Hiroshima and Nagasaki in August 1945, nuclear warfare has shaped the imagination of both the public and policymakers. New discoveries, evolving technologies and international treaties alternately undermined or strengthened nuclear stability between the U.S. and the Soviet Union during the Cold War [1]. More recently, emerging technologies have been influencing nuclear relations between the U.S., Iran, North Korea and Russia [2].
This article argues that artificial intelligence is likely to become the next game-changer for nuclear weaponry, negatively affecting global nuclear stability. Although AI-based defensive weapon systems might offset the instability caused by cutting-edge military technologies, applying artificial intelligence to nuclear command and control (C2) systems has the potential to trigger highly dangerous nuclear escalation.
AI and nuclear deterrence: one step forward, two steps back
With the development of state-of-the-art weapons such as hypersonic missiles, the nuclear balance between countries might shift, leading to potential escalation. As deterrence is likely to be undermined, states are considering the deployment of AI-based weapon systems to restore the balance.
As of now, no defensive weapon system is capable of intercepting hypersonic missiles. These weapons can strike a target in an average time of six minutes: they greatly exceed the speed of sound, flying at Mach 5 or above along unpredictable trajectories [3]. Hypersonic technology is being tested by global powers such as the U.S., China and Russia [4]. If equipped with nuclear warheads, hypersonic missiles would significantly alter the nuclear balance between countries, raising tensions between nuclear powers. Analysts agree that only AI-based predictive analysis, combined with cutting-edge technologies such as quantum computing, might yield effective defence systems against hypersonic missiles [5]. If developed, these technologies would provide the tools necessary to redress the instability caused by hypersonic missiles.
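To give a sense of the scale involved, the back-of-envelope sketch below works out what Mach 5 implies over a six-minute flight. The sea-level speed of sound (~343 m/s) is an assumption chosen for illustration; actual speeds vary with altitude and flight profile.

```python
# Back-of-envelope check of the six-minute strike window cited above.
# Assumption: the missile sustains Mach 5 relative to the sea-level
# speed of sound (~343 m/s); real speeds depend on altitude.

SPEED_OF_SOUND_M_S = 343.0   # approximate, at sea level
MACH = 5.0
FLIGHT_TIME_S = 6 * 60       # six minutes, in seconds

speed_m_s = MACH * SPEED_OF_SOUND_M_S            # ~1,715 m/s
distance_km = speed_m_s * FLIGHT_TIME_S / 1000   # ~617 km

print(f"Speed: {speed_m_s:,.0f} m/s (~{speed_m_s * 3.6:,.0f} km/h)")
print(f"Distance covered in six minutes: ~{distance_km:,.0f} km")
```

Even under these conservative assumptions, a Mach 5 weapon covers roughly 600 km in the six minutes a defender would have to detect, track and respond.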
However, AI-based missile defence systems might produce more insecurity than the stability they aim to achieve. Indeed, AI-based offensive capabilities could have a disruptive impact on nuclear deterrence. Partially based on the concept of mutually assured destruction (MAD), nuclear deterrence holds when no country can conduct a nuclear attack without suffering a second nuclear strike from the enemy [6]. This psychological and technical equilibrium might be undermined by the belief that artificial intelligence could locate and destroy all of an enemy's offensive nuclear weapons in a disarming first strike. Although such a capability is technically implausible, if a government believes that an enemy's AI-based offensive systems can destroy its nuclear arsenal, decision-makers may be psychologically encouraged to strike first in moments of high tension [7].
The threat posed by AI-based nuclear C2 systems
In most countries, responses to threats follow the OODA loop model, which consists of four steps: observe, orient, decide, act [8]. Due to recent advances in military technology, however, the time available to move through the OODA loop has shrunk, demanding ever faster decisions. If hypersonic missiles, potentially carrying nuclear warheads, can strike their targets within roughly six minutes, action must be taken almost immediately. Yet immediacy may come at the expense of attentive human judgment, a key factor that already prevented nuclear war in the 20th century. The time-budget sketch below illustrates the pressure such a window places on each step of the loop.
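This sketch is purely illustrative: the share of time assigned to each phase is a hypothetical assumption, not any state's actual doctrine. It simply divides a six-minute window across the four OODA phases to show how little time each step would receive.

```python
# Illustrative only: split a six-minute response window across the
# four OODA phases. The phase shares below are hypothetical
# assumptions for illustration, not any state's actual doctrine.

WINDOW_S = 6 * 60  # six-minute strike window, in seconds

phase_shares = {   # hypothetical fraction of the window per phase
    "observe": 0.35,
    "orient": 0.30,
    "decide": 0.20,
    "act": 0.15,
}

for phase, share in phase_shares.items():
    print(f"{phase:>8}: ~{share * WINDOW_S:.0f} s")

# observe: ~126 s, orient: ~108 s, decide: ~72 s, act: ~54 s.
# Windows this short create pressure to automate "observe" and
# "orient" -- and eventually "decide" -- with AI.
```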
Both machines and humans can misjudge situations on the basis of partial or inaccurate information. Yet individuals involved in nuclear C2 systems have already averted nuclear catastrophe during the Cold War. The best-known case is that of Stanislav Petrov: when a Soviet early-warning satellite apparently detected the launch of five U.S. missiles in 1983, he declared the incident a false alarm, reasoning that the U.S. would never have opened a surprise attack with only five missiles [9]. He made the right call; the Soviet satellite had misinterpreted sunlight reflecting off clouds as a missile launch [10]. This episode shows how machine error might lead to disastrous choices if human judgment is excluded from the decision-making process.
In the context of modern disinformation campaigns, the negative influence of machine misjudgement on nuclear stability is even greater. The International Institute for Strategic Studies (IISS), in collaboration with the Carnegie Corporation of New York, has conducted tabletop exercises to investigate the vulnerabilities of AI-based nuclear command and control to disinformation [11]. The results are worrying: they show how non-state actors' disinformation operations could bring about rapid nuclear escalation. In one scenario, fake images showing three American soldiers killed by Russian nerve gas in Syria led U.S. officials to build a legal case for the potential use of tactical nuclear weapons. Subsequently, fake news reporting that the families of high-ranking U.S. officials were hurriedly moving to Washington, D.C. and that missile silos had gone on high alert alarmed Russian officials. In response, Russian AI-based early warning systems warned the leadership that a U.S. strike was imminent. The scenario ended with both governments recognising the false alarm and shutting down the non-state actor's online activity.
While the false alarm was recognised in that scenario, decision-making outcomes might differ greatly in the context of a hypersonic missile race. How would governments act under such tension? Would officials be able to complete the OODA loop in under six minutes? To what extent would they rely on AI-based decision-making processes?
Conclusion: is this the end of the Enlightenment?
Comparing the invention of the printing press in the 15th century with the development of artificial intelligence, former United States Secretary of State Henry Kissinger once stated:
The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. [12]
Deeply critical of AI-based technologies, Kissinger foresees the end of the Age of Reason: the ultimate collapse of human dominance over technology. To prevent this outcome, academics are calling on policymakers to lead this unprecedented technological development by mastering and tackling the most concerning aspects of the relationship between AI and nuclear stability.
Experts warn that negotiating international agreements on the application of artificial intelligence in warfare is highly urgent [13]. As considerable uncertainty remains about how AI systems learn from training data, professionals suggest that governments establish international norms excluding AI algorithms from operational nuclear C2 systems [14]. Given the profound instability that AI-based nuclear weapons and command and control systems might create, countries must act promptly to address and, ultimately, prevent nuclear escalation.
Sources
[1] Gentile, Gian P. et al. (2021), A History of the Third Offset, 2014–2018, Santa Monica, Calif.: RAND Corporation.
[2] Ibid.
[3] The RAND Corporation (2017) ‘Hypersonic Missile Nonproliferation,’ available from: https://www.youtube.com/watch?v=FyUTNRIuAqc&t=360s.
[4] Lindborg, K. (2020) ‘Hypersonic Missiles May Be Unstoppable. Is Society Ready?,’ Christian Science Monitor, https://www.csmonitor.com/USA/Military/2020/0331/Hypersonic-missiles-may-be-unstoppable.-Is-society-ready.
[5] West, Darrell M. and Allen, John R. (2020), Turning Point: Policymaking in the Era of Artificial Intelligence, Washington, D.C.: Brookings Institution Press.
[6] Kassab, Hanna S. (2014) “In Search of Cyber Stability: International Relations, Mutually Assured Destruction and the Age of Cyber Warfare,” in Jan-Frederik Kremer and Benedikt Müller, eds., Cyberspace and International Relations. Theory, Prospect and Challenges.
[7] West, Darrell M. and Allen, John R. (2020), Turning Point: Policymaking in the Era of Artificial Intelligence, Washington, D.C.: Brookings Institution Press.
[8] The RAND Corporation (2017) ‘Hypersonic Missile Nonproliferation,’ available from: https://www.youtube.com/watch?v=FyUTNRIuAqc&t=360s.
[9] Farabaugh, Bryce (2019) ‘Bad Idea: Integrating Artificial Intelligence with Nuclear Command, Control, and Communications,’ Defense360, available from: https://defense360.csis.org/bad-idea-integrating-artificial-intelligence-with-nuclear-command-control-and-communications/.
[10] Ibid.
[11] Fitzpatrick, Mark (2019) ‘Artificial Intelligence and Nuclear Command and Control,’ The International Institute for Strategic Studies (IISS) (blog), available from: https://www.iiss.org/blogs/survival-blog/2019/04/artificial-intelligence-nuclear-strategic-stability.
[12] Kissinger, Henry A. (2018) ‘How the Enlightenment Ends,’ The Atlantic, available from: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
[13] West, Darrell M and Allen, John R (2021) ‘It Is Time to Negotiate Global Treaties on Artificial Intelligence,’ Brookings, available from: https://www.brookings.edu/blog/techtank/2021/03/24/it-is-time-to-negotiate-global-treaties-on-artificial-intelligence/.
[14] Ibid.
Reference list
Farabaugh, Bryce. “Bad Idea: Integrating Artificial Intelligence with Nuclear Command, Control, and Communications.” Defense360, December 3, 2019. https://defense360.csis.org/bad-idea-integrating-artificial-intelligence-with-nuclear-command-control-and-communications/.
Fitzpatrick, Mark. “Artificial Intelligence and Nuclear Command and Control.” The International Institute for Strategic Studies (IISS) (blog), April 16, 2019. https://www.iiss.org/blogs/survival-blog/2019/04/artificial-intelligence-nuclear-strategic-stability.
Gentile, Gian P., Michael Robert Shurkin, Alexandra T. Evans, Michelle Grisé, Mark Hvizda, Rebecca Jensen, International Security and Defense Policy Center, and RAND Corporation. A History of the Third Offset, 2014–2018. Santa Monica, Calif.: RAND Corporation, 2021.
Kassab, Hanna Samir. “In Search of Cyber Stability: International Relations, Mutually Assured Destruction and the Age of Cyber Warfare.” In Cyberspace and International Relations: Theory, Prospect and Challenges, edited by Jan-Frederik Kremer and Benedikt Müller. Springer, 2014.
Kissinger, Henry A. “How the Enlightenment Ends.” The Atlantic, May 15, 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
Lindborg, K. “Hypersonic Missiles May Be Unstoppable. Is Society Ready?” Christian Science Monitor, March 31, 2020. https://www.csmonitor.com/USA/Military/2020/0331/Hypersonic-missiles-may-be-unstoppable.-Is-society-ready.
The RAND Corporation. “Hypersonic Missile Nonproliferation.” YouTube, September 28, 2017. https://www.youtube.com/watch?v=FyUTNRIuAqc&t=360s.
West, Darrell M., and John R. Allen. Turning Point: Policymaking in the Era of Artificial Intelligence. Washington, D.C.: Brookings Institution Press, 2020.
West, Darrell M., and John R. Allen. “It Is Time to Negotiate Global Treaties on Artificial Intelligence.” Brookings, March 24, 2021. https://www.brookings.edu/blog/techtank/2021/03/24/it-is-time-to-negotiate-global-treaties-on-artificial-intelligence/.