Deepfakes: The New Frontier in Political Disinformation

Deepfakes, synthetic media produced by sophisticated artificial intelligence, are increasingly blurring the line between reality and fiction. The technology has serious implications for political disinformation, as it can be weaponised to manipulate public opinion and erode trust in the media. Deepfakes have already been used to target public figures, create divisive narratives, and compromise national security, raising concerns about their potential to undermine democratic processes and fuel social discord. The challenge of countering deepfakes is compounded by the rise of unfiltered content on social media platforms, where misinformation spreads rapidly. As deepfakes continue to grow more sophisticated, developing regulatory measures and reliable detection tools is essential to preserve the integrity of information in the digital age.

By Carmen Constantineanu

Introduction

There is nothing that persuades a person more than watching a video or audio recording of an event. Individuals trust what they see with their own eyes, but what would happen if our ability to distinguish fact from fiction was suddenly taken away by highly convincing, manipulated content which can be weaponised to blur the lines between reality and falsehood?

What are Deepfakes?

Deepfakes are produced by a form of artificial intelligence known as deep learning, in which groups of algorithms called neural networks learn to identify patterns by analysing large amounts of data. Deepfakes originate from a type of deep learning called generative adversarial networks, or GANs, in which two algorithms compete with each other. One, called the generator, creates content based on real data (for example, making fake cat pictures from a database of real ones). The other, the discriminator, tries to detect the fake content. Because these algorithms keep training against each other, they quickly improve, allowing GANs to create highly realistic but fake audio and video [1]. Large amounts of video and audio footage of political actors are readily available online. When this footage is used as training data for GANs, users can create fabricated content of these public figures and disseminate it online without any clear signs that set it apart from real footage [2].
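The adversarial loop described above can be illustrated with a deliberately simplified sketch. This is not a real neural network: the "generator" is a single number trying to imitate a one-dimensional "real data" distribution, and the "discriminator" is a simple threshold. All names and values here are illustrative, but the dynamic (two players improving against each other until fakes resemble the real data) is the same one GANs exploit.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real data" distribution the generator must imitate

def real_sample() -> float:
    """Draw one 'authentic' data point (e.g., a frame of real footage)."""
    return random.gauss(REAL_MEAN, 0.5)

g_mean = 0.0      # generator parameter: starts far from the real data
threshold = 2.0   # discriminator parameter: a 1-D decision boundary
lr = 0.05         # learning rate for both players

for _ in range(2000):
    real = real_sample()
    fake = random.gauss(g_mean, 0.5)  # generator's current forgery
    # Discriminator: slide the boundary toward the midpoint of the samples,
    # trying to keep real and fake on opposite sides.
    threshold += lr * ((real + fake) / 2 - threshold)
    # Generator: nudge its output toward the side the discriminator
    # currently labels "real".
    g_mean += lr if fake < threshold else -lr

# By the end, the generator's output clusters near the real distribution,
# and the discriminator can no longer reliably separate the two.
print(g_mean)
```

The key point of the sketch is that neither player is given the answer: the generator improves only by probing what the discriminator rejects, which is why GAN-produced fakes tend to lack obvious tell-tale artifacts.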

The average person may struggle to recognise when they are being misled by a deepfake. Studies show that people accurately identify deepfakes only about 50% of the time—essentially no better than random guessing. Detection becomes even harder when videos carry the smearing and blocky distortions introduced by compression, which is common on social media [3]. This makes deepfakes a potent threat, particularly in politics, where they can be used to manipulate narratives and mislead the public.

Deepfakes and Political Disinformation

The primary concern lies with the possible use of deepfake technology to jeopardise national security by spreading political propaganda. It is especially problematic during election periods, when altered content featuring misleading or false statements is disseminated on the internet with the potential to shape public perception and voters' decisions [4]. In 2017, Moscow attempted such a tactic against the campaign of French politician Emmanuel Macron. Russian hackers released a cache of stolen documents, most of them doctored. Their efforts failed mainly because French media law prohibits election coverage in the 44 hours before the vote. Most countries, however, do not impose a media blackout during voting, meaning deepfake content released shortly before an election could spread quickly on the internet without enough time to debunk it [1].

The use of deepfakes to spread divisive messages or exploit political or ethnic tensions is another major concern, as it can amplify existing conflicts within a state or community. Manipulating images and speeches to promote polarising narratives could intensify internal conflicts, complicate peace efforts, or undermine precarious reconciliation processes. The strategic use of such technologies can exacerbate political and social instability, eroding trust between ethnic groups or political factions [5]. Furthermore, deepfakes can be used to manipulate news on conflict developments and other incidents. A striking example is a manipulated video of President Volodymyr Zelenskyy in which he appears to order his soldiers to surrender their weapons and forfeit the fight against Russia [4].

Additionally, deepfakes can seriously damage the political reputation of public figures by depicting them in compromising settings or endorsing ideas they would not otherwise support. This type of manipulation can weaken their perceived leadership ability and erode public trust. A relevant example involves American politician Nancy Pelosi, who became a target of ridicule in 2019 after a doctored video, slowed down to make her appear inebriated during a speech, circulated widely on social media [6]. In India, deepfakes were used against female journalists and politicians by inserting their faces into fake pornographic videos that were then disseminated on the internet. AI-generated content can easily be misused to sexualise female politicians and compromise their credibility during electoral campaigns [7].

Challenges and Further Implications of Deepfakes

Deepfakes pose a significant threat, particularly as distinguishing fact from fiction has become increasingly challenging. During the twentieth century, the flow of information was primarily managed by newspapers, magazines and television networks. Journalists adhered to strict professional guidelines to maintain the quality of published content, and only a limited number of mass media outlets could disseminate information at scale. In the past decade, however, individuals have turned to social media platforms such as X and Facebook as news sources, platforms that host largely unfiltered content. Users also tend to see and engage with viewpoints that align with their own, a product of these platforms' algorithms, which create echo chambers. In these environments, people tend to share information without checking its legitimacy, and repeated sharing lends the content an air of credibility [1].

One significant implication of the increased uncertainty caused by deepfakes is the decrease in trust in political news, specifically on social media. Globally, trust in the news is steadily declining, and deepfakes may generate a belief among citizens that it is impossible to establish a reliable ground for truth [8]. This uncertainty about what is true and false has become a key objective of state-sponsored propaganda, with Pomerantsev (2015) noting that the goal is to "trash the information space" so that audiences abandon their search for truth amid the chaos [9]. 

One technological remedy against disinformation is an approach called “digital provenance”. It involves watermarking a video, audio file or picture, or embedding a stamp in its metadata, at the moment of creation, so that the authentic content carries a digital mark that can later be compared against a suspect copy to identify alterations. In practice, however, things are not so simple. Firstly, these tools would need to be installed on a wide range of devices, which manufacturers are unlikely to do without legal requirements or proven demand. Secondly, making authentication mandatory for uploading content to major platforms like YouTube or Instagram is unrealistic, as platforms fear losing users to competitors that allow unauthenticated content [10]. A second remedy involves forensic technology. In 2018, computer scientists at the University at Albany and Dartmouth created a program that captures abnormal eyelid movements in deepfake videos. However, with the rapid advance of the technology, deepfake generators quickly learn to eliminate such artifacts and produce ever more realistic videos [11].
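The comparison step at the heart of digital provenance can be sketched in a few lines. This is a deliberately minimal stand-in: it uses a plain SHA-256 fingerprint where real provenance schemes (such as signed manifests embedded in metadata) use cryptographic signatures, and the byte strings are placeholders for actual media files.

```python
import hashlib

def provenance_stamp(content: bytes) -> str:
    """Record a fingerprint of the media at the moment of creation."""
    return hashlib.sha256(content).hexdigest()

def is_authentic(content: bytes, stamp: str) -> bool:
    """Check a circulating copy against the original stamp."""
    return hashlib.sha256(content).hexdigest() == stamp

# Hypothetical media: any change to the bytes changes the fingerprint.
original = b"raw video bytes straight from the camera"
stamp = provenance_stamp(original)

tampered = original.replace(b"camera", b"editor")
print(is_authentic(original, stamp))   # True
print(is_authentic(tampered, stamp))   # False
```

Even this toy version shows why the approach only works if the stamp is created at capture time on the device itself: a fingerprint computed after the fact would simply certify whatever content, genuine or fabricated, it was given.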

Conclusion

Deepfakes pose a growing threat in the current digital climate, making it increasingly difficult to distinguish what is real from what is not. Their misuse goes beyond simple disinformation and has the power to alter political perceptions, personal reputations, and individuals' trust in society. Weaponised deepfakes can have serious effects on the perceived reality of the world, making regulatory instruments and detection tools a priority for future technological development.

References

[1] Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs 98, no. 1 (2019): 147–55. https://heinonline.org/HOL/LandingPage?handle=hein.journals/fora98&div=18&id=&page=.

[2] Vaccari, Cristian, and Andrew Chadwick. “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.” Social Media + Society 6, no. 1 (January 1, 2020): 205630512090340. https://doi.org/10.1177/2056305120903408

[3] Rössler, Andreas, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. “FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces.” arXiv (Cornell University), January 1, 2018. https://doi.org/10.48550/arxiv.1803.09179

[4] Battista, Daniele. “Political Communication in the Age of Artificial Intelligence: An Overview of Deepfakes and Their Implications.” Questa Soft, 2024. https://www.ceeol.com/search/article-detail?id=1251211

[5] Dobber, Tom, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes De Vreese. “Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?” The International Journal of Press/Politics 26, no. 1 (July 25, 2020): 69–91. https://doi.org/10.1177/1940161220944364

[6] Cole, Samantha. “Americans Don’t Need Deepfakes to Believe Lies About Nancy Pelosi.” VICE, May 24, 2019. https://www.vice.com/en/article/deepfakes-nancy-pelosi-fake-video-trump-tweet/

[7] Tomić, Zoran, Tomislav Damnjanović, and Ivan Tomić. “Artificial Intelligence in Political Campaigns.” South Eastern European Journal of Communication 5, no. 2 (December 1, 2023): 17–28. https://doi.org/10.47960/2712-0457.2.5.17

[8] Hanitzsch, Thomas, Arjen Van Dalen, and Nina Steindl. “Caught in the Nexus: A Comparative and Longitudinal Analysis of Public Trust in the Press.” The International Journal of Press/Politics 23, no. 1 (November 15, 2017): 3–23. https://doi.org/10.1177/1940161217740695

[9] Pomerantsev, Peter. “Inside Putin’s Information War.” StopFake, January 6, 2015. https://www.stopfake.org/en/inside-putin-s-information-war/

[10] Wang, Run, Felix Juefei-Xu, Meng Luo, Yang Liu, and Lina Wang. FakeTagger: Robust Safeguards Against DeepFake Dissemination via Provenance Tracking. Proceedings of the 29th ACM International Conference on Multimedia. Association for Computing Machinery, 2021. https://doi.org/10.1145/3474085.3475518

[11] University at Albany. “Tackling the DeepFake Detection Challenge,” n.d. https://www.albany.edu/cnse/news/2019-tackling-deepfake-detection-challenge