Deepfakes: The New Frontier in Political Disinformation

Deepfakes, a sophisticated form of artificial intelligence, are increasingly blurring the line between reality and fiction. This technology has serious implications for political disinformation, as it can be weaponised to manipulate public opinion and erode trust in the media. Deepfakes have already been used to target public figures, create divisive narratives, and compromise national security, raising concerns about their potential to undermine democratic processes and fuel social discord. The challenge of countering deepfakes is compounded by the rise of unfiltered content on social media platforms, where misinformation spreads rapidly. As the sophistication of deepfakes continues to advance, developing regulatory measures and reliable detection tools is essential to preserve the integrity of information in the digital age.

Guarding Humanity: Mapping the Landscape of X-Risks

Existential risk has become a growing field of scientific inquiry, as humanity's future on this planet seems increasingly insecure. This is due to a range of potential threats, including the rapid advancement of AI technology, climate change, and nuclear war.

Artificial Intelligence: A Game Changer for All-Source Intelligence Activities?

Intelligence agencies today must collect and analyse intelligence on numerous individuals and state and non-state actors in an environment of complex hybrid threats and overlapping interests. Additionally, there is a glut of data from multiple sources that needs to be processed quickly and accurately. Artificial Intelligence (AI) presents a viable way to maximise the value of All-Source intelligence products. Despite all the promise AI holds for the Intelligence Community, the technology is far from perfect.

Plenty of Phish in the Sea: How Artificial Intelligence is Transforming the Oldest form of Cybercrime

Artificial intelligence and machine learning (AI/ML) have seamlessly and fundamentally transformed the way we interact with digital technology [1]. Dual-use technologies such as AI/ML can be quickly exploited for cybercriminal activity. One example is phishing, one of the first types of cybercrime. While phishing is still widely perceived as an outdated scam, AI/ML advancements have paved the way for more convincing phishing attacks and the wider use of hyper-targeted spear-phishing. This article will focus on the AI/ML-enabled transformation of phishing and spear-phishing and the consequences it poses for the cybersecurity environment.

The Augmentative Effect of AI in The Open Source Intelligence Cycle

Artificial Intelligence (AI) has become one of the most polarising topics and eye-catching terms in our contemporary lexicon, seen either as a paragon of modern technology or as a harbinger of humankind's technological doom, depending on who you ask. From pocket AIs such as Siri to self-educating AIs in Silicon Valley, AI has permeated virtually all facets of life.

Artificial intelligence and nuclear warfare. Is Doomsday closer? - Cyber Security and AI Series

Artificial intelligence (AI) has the potential to radically change societies. Employing it in fields ranging from healthcare to the economy can improve people's lives. However, this revolutionary technology may cause disruptive imbalances in the military power relations between countries, especially in the field of nuclear stability. Although the development of AI-based defensive weapon systems might improve nuclear deterrence, incorporating artificial intelligence into nuclear offensive capabilities and command and control (C2) systems could accelerate escalation in crisis scenarios.