
Generative artificial intelligence is changing the world of politics and the information ecosystem. It accelerates content creation and exchange and reduces costs, but at the same time it amplifies the spread of disinformation. Synthetic media in visual or audio form, known as deep fakes, are commonly associated with electoral manipulation, but so far the threats stem from the imagined potential of the technology rather than from any direct disruption of electoral processes. The fundamental problem with AI-powered disinformation is quantifying its ultimate influence and understanding that synthetic media need not manipulate recipients directly to exert a subtle influence by shaping cognitive associations or decision-making processes – writes Mateusz Łabuz, researcher at the Institute for Peace Research and Security Policy at the University of Hamburg and non-resident fellow at Future Shift Labs.
Deep fakes as a tool of disinformation
Deep fakes can be defined as AI-generated or AI-manipulated audio, image, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. This definition, adopted in the EU Artificial Intelligence Act, reflects their nature quite well. Deep fakes are intended to mislead recipients and are therefore often classified as a disinformation tool.
The numerous negative uses of deep fakes include bypassing biometric security, conducting financial scams, and creating non-consensual intimate content. A key issue discussed in the literature, however, is disinformation. Attention is drawn primarily to the potential of deep fakes for political manipulation, especially the discrediting of political opponents in the pre-election period. Surprisingly, despite repeated warnings and almost hysterical reactions, deep fakes did not lead to any breakthrough in the electoral processes of 2023 and 2024. Yet the manipulation potential of deep fakes reaches much further than election results, and a superficial evaluation of research findings may create a false impression, or even lull us into a false sense of security.
Elections in danger? Not directly. At least for now.
First, it is necessary to verify whether deep fakes have any influence on election results at all. At the moment, quantifying their ultimate influence on voting preferences seems methodologically unattainable. What can be verified, under simplifying assumptions, is whether any of the deep fakes that reached mainstream media, and thus a significant number of recipients, disrupted an election. Together with Dr. Christopher Nehring, we analyzed media reports on 85 election campaigns (as of the beginning of 2024) that had either taken place in 2023 or were just starting. Only in 11 cases were we able to identify attempts at electoral manipulation with deep fakes that reached the mainstream media. In none of the analyzed cases did deep fakes significantly influence the election results.
At the end of 2024, we conducted another round of research. The result was the same: again, we did not detect any significant breakdown of democratic processes directly caused by deep fakes. The Future Shift Labs report “The Pervasive Influence of AI on Global Political Campaigns 2024” showed that AI-powered disinformation was gaining momentum, but deep fakes had once again failed to become a game-changer. This is surprising in light of expert predictions. However, it would be reckless to underestimate threats that could at any time translate into a significant change in voting preferences.
The risk of uncertainty
First and foremost, selected cases deserve attention, as they may herald specific threats to the integrity of electoral processes in the future. In the 2023 presidential election in Argentina, the information space was flooded with deep fakes produced by both candidates, and the public, apparently encouraged by the politicians’ actions, began to experiment with AI on its own. In Turkey in 2023, targeted deep fake pornography led to a candidate’s withdrawal from the presidential election. In Slovakia, a discrediting attack was launched against one of the leading candidates at the so-called “decisional checkpoint,” when there is too little time (in this case, 48 hours before the parliamentary election) to debunk false information.
Although in none of the above-mentioned cases did we record a significant disruption of the final results, we cannot be sure that the results would have been the same had the deep fakes not appeared. There is still no established methodology for measuring the impact of deep fakes on political processes, especially since they are only one element of the varied toolkit used by disinformation actors. This significantly limits our ability to assess the consequences of deep fake misuse.
Shaping attitudes
Moreover, deep fakes do not have to appear in an electoral context to play a role in shaping political and social attitudes. It is safe to conclude that deep fake technology carries enormous potential for psychological influence. In fact, voters’ perceptions of and fears about deep fakes might lead to situations where malign actors do not need to use a deep fake at all, but simply invoke the possibility that a certain piece of information might be one – a phenomenon often described as the “liar’s dividend.” And there is no shortage of such cases: the authenticity of real media is increasingly questioned, which allows unwanted genuine content to be dismissed as falsified.
Another element is the subtle shaping of voter preferences, which may concern not only specific choices but also broader attitudes towards selected values. Attacks on specific individuals may be understood as attacks on proxy targets, aimed at undermining trust in democracy, institutions, or authorities. Deep fakes can thus easily become a form of anti-democratic expression with a political dimension.
Epistemic apocalypse, or the self-fulfilling prophecy of doomsayers?
One of the recurring narratives, rooted deeply in philosophical and journalistic discourse, heralds the arrival of an epistemic apocalypse: the accumulation of AI-generated content indistinguishable from real materials. This, in turn, would erode the epistemic value of content, blurring the line between truth and falsehood and, consequently, undermining public trust in the media and in information itself. Some of these predictions have already begun to come true – the recorded decline in trust in the media and the growing uncertainty about the authenticity of information threaten to further erode the fundamental paradigm of “seeing is believing” that has accompanied humanity for centuries.
One may plausibly argue that apocalyptic narratives rightly draw attention to an important problem, but by dazzling audiences with dystopian visions they amplify the very processes they describe and may even act as a self-fulfilling prophecy, accelerating the decline of trust in democratic processes.
For those reasons, alongside the sheer quantity of deep fakes and the cumulative effect of their presence in the information space, one of the biggest threats they pose is psychological. Voters, politicians, and journalists are already confused and uncertain about the authenticity of information, partly as a consequence of fake news and disinformation campaigns. The mere fear of being unable to detect deep fakes and distinguish them from authentic content may alter voters’ behavior through insecurity. Moreover, it may undermine basic trust in the quality and integrity of democratic processes. When everyone is talking about how elections can be rigged with AI-generated media, why should anyone trust the final results?
Yet the observed strategies of disinformation actors are often based on easily recognizable AI-powered creations, designed not to imitate reality hyper-realistically but to create specific cognitive associations. This is particularly interesting in the context of memes and the low-quality AI-generated content that floods the Internet and is incredibly easy to produce. This is another conclusion from our research: AI finds numerous applications in political campaigning that do not depend on quality. In this case, quantity matters more.
The “epistemic apocalypse” predicted in the context of deep fakes may therefore take a different form, one related to bombarding human audiences with content that distorts the perception of reality. These distortions, often exploiting cognitive biases and the heuristics (mental shortcuts) that guide our decision-making, have significant potential to influence our decisions and, at the same time, to shape political preferences. Through trivial forms of communication, often based on memes and humor, our cognitive immune system is under constant attack, and caution is naturally suppressed.
Undermining trust to undermine democracy?
Our findings should not be taken to suggest that deep fakes pose no direct threat to election outcomes. The cases indicated above, especially attacks at decisional checkpoints, should be analyzed in detail, as they highlight a possible strategy for swinging election outcomes directly. The resilience of democratic systems will be of key significance, as an erosion of trust may increase the probability of successful direct attacks. In this context, the gradual degradation and loss of the epistemic value of audio and visual materials drive some of these negative processes. The previous two to three years should give us food for thought and show us where to look for vulnerabilities, especially since disinformation actors have been testing various strategies, creatively exploiting the changing ecosystem of information exchange.
Research, public discourse, and AI and media literacy efforts should probably shift their focus away from deep fake technology itself towards human interaction with AI-generated content, as well as social responses to deep fakes and low-quality materials. At the same time, journalistic discourse should shape public debate responsibly, not dazzling audiences with apocalyptic visions but strengthening awareness, and thus the social resilience that is key to democracy.
Little by little, deep fakes and AI-generated content chip away at the very existence of truth and authenticity, but also at facts and logic. That is why we should worry not only about election integrity but also about basic trust in democracy, perhaps even about our sense of reality, and above all about our ability to make informed decisions based on facts rather than on prejudices instilled in us.
– The author is a researcher at the Institute for Peace Research and Security Policy at the University of Hamburg (IFSH).