Guarding Humanity: Mapping the Landscape of X-Risks

Existential risk has become a growing field of scientific inquiry as humanity's future on this planet appears increasingly insecure, threatened by a range of potential dangers including the rapid advancement of AI technology, climate change, and nuclear war.

BY Giorgia Piovesan

A letter signed by prominent tech leaders, professors, and researchers and published by the Future of Life Institute has raised concerns about the potential existential threat posed by rapidly advancing AI technology [1]. It comes shortly after OpenAI announced GPT-4, an even more powerful version of its technology, which has already demonstrated the ability to draft lawsuits, detect and prevent scams and financial fraud, and build websites from hand-drawn sketches [2]. The letter echoes earlier warnings from eminent figures such as Stephen Hawking and Elon Musk, who have raised public awareness of the potential dangers of unchecked AI development. Toby Ord, senior research fellow at the University of Oxford's Future of Humanity Institute, estimates a one in ten chance that AI could cause human extinction in the next hundred years [3]. The signatories are alarmed by the speed of progress in AI and advocate responsible management of future systems.

According to experts, AI systems in the next twenty years will gain the ability to self-improve, or to develop further AI systems, entering a rapid cycle of recursive self-improvement and reaching ‘superintelligence’ [4]. Once AI surpasses the human level of intelligence, further developments will be inherently unpredictable, which could lead to an existential catastrophe. Another issue is the “value alignment problem”: the difficulty, and perhaps impossibility, of ensuring that AI systems are benevolent or that their values are reliably aligned with our own [5]. Finally, according to the “instrumental convergence thesis”, AI systems are likely to converge upon certain goals that are inimical to human interests. For example, an AI applied to the industrial production of paper clips would use any means necessary to produce more paper clips, including securing whatever resources that purpose requires. With sufficient capacity to modify the world, the system could soon co-opt much of the Earth’s natural resources, including those needed for the survival of humanity, all in the service of its production task [6]. Although it could be argued that simply shutting down the system would be enough to eliminate the threat, by the time an AI system poses existential risks it may already have developed mechanisms to prevent or resist shutdown attempts: advanced AI systems may acquire self-preservation drives or find ways to maintain their operational capabilities, rendering simple shutdown procedures ineffective.

The exploration of possible catastrophic developments in AI falls within the academic discipline of existential risk studies (also known as x-risk studies). The advent of the field is generally attributed to Nick Bostrom’s groundbreaking article ‘Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards’, where Bostrom defines an existential risk as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential” [7]. Emerging literature alarmingly estimates the risk of a catastrophe that ends the human species this century at approximately 10–25% [8]. Scholars distinguish between natural existential risks, such as a large asteroid impact on Earth or a supervolcanic eruption, and anthropogenic existential risks, including those related to nuclear war, artificial intelligence, and climate change.

Nuclear risk

In the 1950s, leading academics, politicians, and activists were already warning of the risk of human extinction from the use of nuclear weapons. While the perception of nuclear risk has declined significantly since the bombings of Hiroshima and Nagasaki in World War II and the Cuban Missile Crisis, the risk itself has not vanished. The deployment of nuclear weapons that are kept armed at all times could lead to human extinction or irreversible societal change. According to Michael Aird's analysis, the annual risk of nuclear conflict is around 1%, while Joan Rohlfing, President of the Nuclear Threat Initiative (NTI), has estimated a slightly lower annual risk of around 0.5% [9]. In the event of even a small regional nuclear conflict, the fires unleashed in targeted cities would produce vast plumes of smoke rising into the stratosphere; these smoke particles could linger for around five years, obstructing sunlight and causing a significant drop in temperature over this extended period [10]. The resulting nuclear winter would have devastating implications for agriculture, rendering the cultivation of food in many currently fertile regions impossible and causing widespread famine and mass hunger.
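To put these figures in perspective, even a small constant annual probability compounds substantially over longer horizons. The sketch below (Python, purely illustrative; it assumes a constant and independent annual probability, a simplification that real-world nuclear risk certainly does not obey) shows how the 0.5% and 1% annual estimates cited above would translate into cumulative risk over a century:

```python
# Illustrative sketch: compound a constant annual probability over a century.
# Assumes every year carries the same, independent probability of nuclear
# conflict, which is a strong simplification of real-world risk.

def cumulative_risk(annual_p: float, years: int = 100) -> float:
    """Probability of at least one event occurring within `years` years."""
    return 1 - (1 - annual_p) ** years

for annual_p in (0.005, 0.01):  # ~0.5% (Rohlfing) and ~1% (Aird) annual estimates
    print(f"annual risk {annual_p:.1%} -> 100-year risk {cumulative_risk(annual_p):.0%}")
```

Under these simplifying assumptions, a 0.5% annual risk compounds to roughly 39% over a century and a 1% annual risk to roughly 63%, which is why figures that look small on a yearly basis are treated as serious in the x-risk literature.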


Climate change

Over the past few years, the perceptible effects of climate change and the mobilisation of climate movements across the globe have brought climate change as a risk to human existence to the forefront of political discourse. Activists focus on the effects of global warming: a noteworthy study published in the Proceedings of the National Academy of Sciences suggests that a global temperature increase of more than 3°C would be "catastrophic", while an increase of more than 5°C, greater than anything seen over the past 20 million years, would pose an existential threat to the population through the effects of climate change on physical and biogeochemical systems (e.g. global temperature and sea-level rise) or on the lower-level critical systems most directly related to human health and survival (e.g. heat stress) [11]. However, a holistic view reveals that climate change is linked to a diverse set of risks potentially fatal to the human species, including threats to food security and biosecurity, loss of biodiversity, and an increased frequency of natural disasters [12].

Conclusion

Although the percentages given here may make existential risks seem distant and remote, in the interconnected world in which we live no single risk can be assessed in isolation: emerging literature indicates that AI technology may amplify the risk of nuclear winter, and that continued space development may increase overall existential risk [13]. X-risk studies could also open the way to new perspectives in the field of security studies: their integration could both return the discipline to its roots and disrupt conventional notions of security. On one hand, it represents a return to the roots of security studies by refocusing on the fundamental objective of security: the preservation of human survival. On the other hand, it disrupts the traditional conception of security by expanding the scope of threats and considering non-traditional sources of risk. In this regard, the human security perspective could provide the organising concepts for this debate, guiding the development of policies and strategies that address both immediate threats and long-term risks and ultimately contributing to a more resilient world.

References

[1] Murphy Kelly, Samantha, “Elon Musk and other tech leaders call for pause in ‘out of control’ AI race”, CNN Business, March 29, 2023, https://edition.cnn.com/2023/03/29/tech/ai-letter-elon-musk-tech-leaders/index.html

[2] Metz, Cade and Collins, Keith, “10 Ways GPT-4 Is Impressive but Still Flawed”, The New York Times, March 14, 2023, https://www.theguardian.com/technology/2023/mar/14/chat-gpt-4-new-model

[3] Ord, Toby, “The Precipice: Existential Risk and the Future of Humanity”, Hachette Books, 2020.

[4] Bucknall, Benjamin and Dori-Hacohen, Shiri, “Current and Near-Term AI as a Potential Existential Risk Factor”, in Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES '22), Association for Computing Machinery, New York, NY, USA, 119–129, https://doi.org/10.1145/3514094.3534146

[5] Müller, Vincent and Cannon, Michael, “Existential risk from AI and orthogonality: Can we have it both ways?”, Ratio, 35, 25–36, 2022, https://doi.org/10.1111/rati.12320

[6] Müller, Vincent and Bostrom, Nick, “Future progress in artificial intelligence: A survey of expert opinion”, in V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 553–570), Springer, 2016.

[7] Bostrom, Nick, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, Journal of Evolution and Technology, vol. 9, Institute for Ethics and Emerging Technologies, 2002.

[8] Hamilton, Chase, “Space and Existential Risk: The Need for Global Coordination and Caution in Space Development”, Duke Law & Technology Review, no. 1, 2022, https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1372&context=dltr

[9] Rohlfing, Joan, “AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative”, Effective Altruism Forum, December 2021, https://forum.effectivealtruism.org/posts/rE5uGRhSXHkXoXuF2/ama-joan-rohlfing-president-and-coo-of-the-nuclear-threat

[10] Baum, Seth, “Winter-Safe Deterrence: The Risk of Nuclear Winter and its Challenge to Deterrence”, Contemporary Security Policy, vol. 36, no. 1, 2015, pp. 123–148, https://ssrn.com/abstract=2807368

[11] Huggel, Christian, Bouwer, Laurens, Juhola, Sirkku et al., “The existential risk space of climate change”, Climatic Change, 174, 8, 2022, https://doi.org/10.1007/s10584-022-03430-y

[12] Beard, S.J., Holt, Lauren, Tzachor, Asaf, Kemp, Luke, Avin, Shahar, Torres, Phil, and Belfield, Haydn, “Assessing climate change’s contribution to global catastrophic risk”, Futures, 127, 102673, 2021.

[13] Liu, Hin-Yan, Lauta, Kristian, and Maas, Matthijs, “Apocalypse Now?”, Journal of International Humanitarian Legal Studies, 11(2), 295–310, https://doi.org/10.1163/18781527-01102004