The 2016 U.S. presidential election is often cited as a watershed moment for digital disinformation campaigns – revealing the vulnerability of democratic processes to foreign interference and the spread of false information. But since then, elections across the globe have become prime targets for sophisticated influence operations and disinformation campaigns.
Over the past year, nation-state actors have actively sought to sway voter opinions and election outcomes. Disinformation is not just about spreading lies; it is about engineering doubt, eroding trust and fracturing societal cohesion through precisely targeted narrative manipulation. These efforts were notably amplified by artificial intelligence (AI), which allowed campaigns to spread false narratives efficiently and at scale. China, Russia and Iran in particular adapted their tactics to exploit U.S. societal divisions, using AI-enhanced tools and social media platforms to disseminate their messaging widely.
Looking ahead, disinformation campaigns are expected to continue exerting influence not only throughout the election process but also into the post-election period, particularly as Donald Trump prepares for his inauguration. With this in mind, let’s examine the disinformation campaigns that ran in the lead-up to the 2024 U.S. presidential election and how they might evolve in the critical months to come.
China’s Spamouflage influence operation
The Spamouflage influence operation, active since at least 2019 and widely attributed to the Chinese government, has been evolving its tactics to align with geopolitical objectives. Known alternatively as Dragonbridge or Storm-1376, the campaign began focusing on the U.S. presidential election in mid-2023.
Graphika researchers uncovered the operation’s extensive use of social media accounts impersonating U.S. citizens, soldiers and advocacy groups. These accounts strategically distributed AI-generated content targeting both Democratic and Republican candidates, aiming to undermine the electoral process, sway voters and exacerbate social tensions around divisive issues including gun control, homelessness, racial inequality and the Israel-Hamas conflict.
The campaign targeted prominent political figures, including Joe Biden, Donald Trump, Kamala Harris and Marco Rubio. Clemson University researchers identified that Spamouflage has been targeting Rubio with fake news since his re-election bid in 2022. Microsoft also observed Spamouflage using AI-generated news broadcasts and AI-manipulated imagery to fuel conspiracy theories. The tactics targeting Rubio have notably evolved since 2022, with hacked accounts and higher-quality disinformation spread across platforms like Medium and social media sites X and TikTok.
While primarily focusing on presidential candidates, Spamouflage demonstrated a willingness to target a broader range of political figures, adapting its strategies to maximise potential political and social disruption.
Iran’s hack-and-leak tactics
In May 2024, Iranian-linked cyber actors infiltrated personal accounts connected to Donald Trump’s presidential campaign, obtaining non-public campaign documents and emails. By June 2024, these hackers escalated their operation, initiating a ‘hack-and-leak’ strategy in which a persona named ‘Robert’ distributed stolen materials to media outlets and individuals associated with the Biden campaign.
In August 2024, POLITICO reported receiving anonymous emails containing internal Trump campaign documents, which the campaign confirmed were the result of a hack by foreign sources. A Microsoft report detailed the Iranian government-linked cyber activities, identifying two types of activity: influence campaigns designed to stoke controversy and sway voters, and intelligence gathering on political campaigns aimed at influencing future elections. Multiple Iranian Revolutionary Guard Corps (IRGC)-affiliated actors with links to election interference were identified, including Peach Sandstorm, Lemon Sandstorm, Mint Sandstorm and Cotton Sandstorm. Google researchers corroborated these findings, attributing the campaign to APT42.
The U.S. Department of Justice responded in September 2024 by indicting three Iranian nationals employed by the IRGC. The indictments alleged a coordinated effort to undermine U.S. electoral processes by sowing discord and gathering intelligence. Despite the legal action, the ‘Robert’ persona continued to contact news organisations, with some publications verifying and publishing portions of the leaked documents.
Russia’s Doppelganger campaign
The Russian influence campaign Doppelganger, which originated in 2022 during the Russia-Ukraine war, has continuously adapted its strategies. In 2024, the campaign pivoted to target global elections, initially focusing on the European Parliament election through propaganda and disinformation about socio-economic and geopolitical issues.
Doppelganger employs diverse disinformation techniques, including typosquatted domains mimicking news websites, AI-generated social media content and deepfake videos to spread its messages, while also amplifying messages from another Russia-linked operation, CopyCop. In September 2024, the U.S. Department of Justice seized 32 internet domains linked to the campaign and indicted two RT employees for covertly creating and distributing election-related content.
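Typosquatted domains of this kind can often be detected programmatically. Below is a minimal Python sketch of how a security team might screen newly observed domains against a watchlist of legitimate news sites using edit distance; the watchlist, the suspect domains and the distance threshold are assumptions chosen purely for illustration, not details drawn from the Doppelganger investigations.

```python
# Illustrative sketch only: flag lookalike domains by their edit distance
# to a watchlist of legitimate news domains. The watchlist, suspects and
# threshold are hypothetical values chosen for this example.

WATCHLIST = ["washingtonpost.com", "theguardian.com", "spiegel.de"]

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca with cb
            ))
        prev = curr
    return prev[-1]

def flag_typosquats(candidates, watchlist=WATCHLIST, max_distance=3):
    """Return (candidate, imitated domain, distance) for near matches.

    A small positive distance means the domain closely imitates a real
    site without being identical, which is the typosquatting pattern.
    """
    hits = []
    for domain in candidates:
        for legit in watchlist:
            d = levenshtein(domain.lower(), legit.lower())
            if 0 < d <= max_distance:
                hits.append((domain, legit, d))
    return hits

if __name__ == "__main__":
    suspects = ["washingtonpost.pm", "theguardlan.com", "spiegel.ltd"]
    for domain, legit, d in flag_typosquats(suspects):
        print(f"{domain} imitates {legit} (edit distance {d})")
```

In practice, defenders would combine this kind of lexical screening with other signals, such as domain registration dates and hosting infrastructure, since edit distance alone produces false positives.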
Previous research by Qurium revealed Doppelganger’s connections to European cybercriminals and advertisement networks. The campaign has since adapted, attempting to discredit disinformation experts like Eliot Higgins and Christo Grozev to deflect accusations of Russian involvement. This shift may be a response to the DOJ’s September affidavit implicating the Social Design Agency in operating the Doppelganger campaign.
Key trends in election disinformation
State-sponsored influence campaigns are continuously adapting their techniques, with AI becoming a critical tool in disinformation strategies. The U.S. Foreign Malign Influence Center predicts ongoing use of AI-generated social media content, exemplified by Russia-linked deepfake videos targeting political figures and Iranian hacker groups using AI to generate election-related narratives.
These campaigns demonstrate remarkable agility in exploiting current events. For example, the assassination attempt on Donald Trump quickly generated numerous conspiracy theories, while other operations leveraged events like hurricanes and university protests to spread divisive messaging.
Ultimately, these influence operations aim to erode public trust and create social discord, but their impact is wide-ranging. Cybersecurity risks such as phishing are multiplied, as disinformation campaigns can be used to lure employees into clicking on malicious links or downloading malware. Campaigns can also spread false information about a specific company, potentially triggering stock market volatility.
For organisations, staying ahead requires robust threat intelligence to anticipate and mitigate the ripple effects of disinformation. By seeing the bigger picture, businesses can effectively plan and address emerging risks. The 2024 U.S. presidential election demonstrated that in an era of AI and global connectivity, disinformation not only threatens public trust and political stability, but also corporate security and economic markets worldwide.
About the Author
Hannah Baumgaertner is Head of Research at Silobreaker. Hannah can be reached at our company website www.silobreaker.com