Election Security 2024: Information, Influence and Generative AI

Now that we’re in September 2024, the world is nearing the end of the global election super cycle. An extraordinary number of elections have taken place around the world, and they have been democratic, undemocratic, chaotic and surprising. Physical violence and disruption related to elections have not been as prevalent as expected. Unsurprisingly, activity in the information and cyber domains continues to increase. While many important elections are complete, the U.S. presidential election is the final major election of 2024, and activities in the cyber and information domains may prove to be the most consequential yet.

The Cyber Domain: A Dual-Track Threat 

The cyber domain poses a significant dual-track threat to election security in 2024, affecting both the integrity of the electoral process and the cognitive perceptions of the electorate. This threat is particularly acute because cyber operations involve not only the technical manipulation of election infrastructure, such as hacking into voting systems or disrupting electoral websites, but also the dissemination of disinformation through digital platforms, which can alter public perception and undermine trust in democratic processes. AI-generated content (AIGC) is already changing the speed at which actors in the information domain can create content, and it is highly likely that AIGC will soon change the type of content malicious actors can put into the information environment. As highlighted by the RAND Corporation, a “perfect storm” could arise from simultaneous cyber threats targeting the physical, human and reputational assets essential to fair elections.

AIGC: Increasing Risk from Speed of Information and Influence Campaigns

The increasing sophistication of cyber-enabled influence and information operations poses a newer, extremely dangerous threat to citizens, customers and employees. AIGC can be deployed to conduct social engineering attacks: influencing opinions, generating sympathy or disgust, affecting the will of people in organizations and creating general chaos. Disruptions created by AIGC can waste vast amounts of organizational time and resources, and the cost to the attacker keeps falling as the technology advances.

The NYU Stern Center for Business and Human Rights emphasizes that one of the greatest dangers in 2024 comes from the spread of disinformation via social media platforms, where false and hateful content can easily proliferate, particularly in a landscape where major tech companies have scaled back their election integrity efforts. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warns that the erosion of public trust, fueled by both domestic and foreign cyber operations, is a critical vulnerability. This erosion could be exacerbated by repeated exposure to misleading narratives online, which has the potential to alienate voters and destabilize the electoral process.

As election security expert Walter Olson from the Cato Institute aptly notes, “When many people feel that the electoral process is rigged or not truly democratic, the risk increases that some individuals or groups will attempt to disrupt or subvert it.” This underscores the importance of both securing the technical aspects of election infrastructure and addressing the cognitive impacts of the information carried across the cyber domain. To better understand this threat, let’s look at how AIGC might manifest in the hands of a threat actor.

Deepfake Videos

Deepfake videos can create highly realistic but entirely fabricated footage of political candidates, making it appear as though they have said or done something they haven’t. This could be used to spread misinformation or discredit a candidate. A deepfake video might show a candidate making inflammatory statements that could alienate key voter groups, potentially swinging the election results. This has already happened in the 2024 election cycle. The Brookings Institution also notes that there could be a “liar’s dividend” for candidates who claim to be the victims of deepfake attacks.

Targeted Propaganda

AI can analyze vast amounts of data to identify voter groups that are susceptible to certain messages. This allows for the creation of hyper-targeted propaganda that plays on the specific fears, biases or concerns of these groups. During the election cycle, AI-generated ads or social media campaigns could be designed to exploit divisive issues, such as immigration or healthcare, to sway voters in key swing states. Last year, campaigns in New York were found to be using AI-generated voices to speak to voters in other languages.

AI-Generated Misinformation

AI can generate large volumes of text that appear credible, making it easier to flood social media and other platforms with false or misleading information. This misinformation can be tailored to specific demographics, amplifying its impact. AI-generated articles or social media posts could falsely claim that a candidate is involved in illegal activities or health scandals, affecting voter perception. Google has reported that the most common categories of AIGC misuse are impersonation, falsification, and scaling and amplification.

Automated Bots and Trolls

Similar to the above tactic, AI-driven bots can be deployed to amplify certain messages or disrupt online discussions. These bots can create the illusion of widespread support or opposition to a candidate, skewing public perception. A bot network could flood social media with support for a fringe candidate or obscure valid criticisms of a major candidate, distorting the public discourse.
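
To make the defensive side of this concrete, a common first-pass heuristic is to flag near-identical messages posted by distinct accounts, a hallmark of coordinated amplification. Below is a minimal sketch of that idea in Python; the sample posts and the 0.9 similarity threshold are illustrative assumptions, not data or tuning from any real platform.

```python
# Minimal sketch: flag possible coordinated amplification by finding
# near-duplicate posts from distinct accounts. The sample posts and the
# 0.9 similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher
from itertools import combinations

posts = [  # (account, text) pairs, e.g., pulled from a platform API
    ("@acct1", "Candidate X is the only honest choice. Spread the word!"),
    ("@acct2", "Candidate X is the only honest choice! Spread the word!"),
    ("@acct3", "Great turnout at the county fair this weekend."),
]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two lowercased strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare every pair of posts; repeated high-similarity text across
# different accounts is a signal worth escalating for human review.
for (acct_a, text_a), (acct_b, text_b) in combinations(posts, 2):
    if acct_a != acct_b and similarity(text_a, text_b) > 0.9:
        print(f"Possible coordination: {acct_a} and {acct_b}")
```

Real bot networks paraphrase to evade exact matching, so production systems typically layer on embedding similarity, posting-time correlation and account-age signals; the pairwise comparison here is only the simplest version of the idea.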

Synthetic Media for Manipulated Narratives

AI-generated images, audio and videos (beyond just deepfakes) can be used to create compelling but false narratives. This synthetic media can be part of broader disinformation campaigns. AI-generated images could depict false evidence of illegal activity, such as ballot tampering, which could be circulated to delegitimize the electoral process. Microsoft has reported on the Chinese government using forms of synthetic media to influence both the Taiwanese and U.S. elections, and OpenAI found that Iranian groups attempted the same type of operation in August 2024.

Potential Effects During Election Season

In a scenario where external actors like Russia, China or Iran interfere in the 2024 U.S. elections, the resultant disinformation campaigns and cyberattacks could significantly destabilize civil society. These actions might ignite civil disturbances as misinformation spreads rapidly, leading to protests, violence and a deepening distrust in governmental institutions. Such societal unrest could have a ripple effect on businesses and industries, causing economic instability. Disruptions in social order can lead to interruptions in supply chains, a decline in consumer confidence and increased operational risks for companies, particularly in sectors reliant on public trust or consumer spending. In extreme cases, prolonged civil unrest could lead to government-imposed restrictions or curfews, which would further disrupt business activities and economic stability.

Measures You Can Take to Protect Your Interests

Risk Assessment

As always, preparation should begin with a proper risk assessment. The MITRE ATT&CK® matrix is a comprehensive, globally accessible knowledge base of adversary tactics and techniques based on real-world observations. In particular, companies should focus on the social engineering techniques adversaries might use: all the AIGC tactics discussed above rely on exploiting victims’ feelings, ideas and biases, which are key elements of any social engineering cyberattack.
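
Because ATT&CK is published as machine-readable STIX data, this first pass can be scripted. The sketch below assumes the public Enterprise bundle in MITRE’s cti GitHub repository and uses a small keyword filter to surface social engineering-related techniques; the keyword list is an illustrative starting point, not an official mapping.

```python
# Minimal sketch: list ATT&CK Enterprise techniques whose names suggest
# social engineering. The bundle URL points at MITRE's public cti repo;
# the keyword filter is an assumption for illustration.
import requests

ATTACK_BUNDLE = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)
KEYWORDS = ("phishing", "spearphishing", "impersonation")  # assumed filter

bundle = requests.get(ATTACK_BUNDLE, timeout=30).json()
for obj in bundle["objects"]:
    # Techniques are STIX "attack-pattern" objects; skip revoked entries.
    if obj.get("type") != "attack-pattern" or obj.get("revoked"):
        continue
    name = obj.get("name", "")
    if any(k in name.lower() for k in KEYWORDS):
        ref = next(
            (r for r in obj.get("external_references", [])
             if r.get("source_name") == "mitre-attack"),
            {},
        )
        print(ref.get("external_id", "?"), name)
```

A list like this is a starting point for mapping which techniques matter most to your organization, not a substitute for a full assessment.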

Education and Training

Regular training on cybersecurity and the risks of AI-generated content is essential. Employees should be able to recognize phishing attempts, social engineering and other tactics that might be used in conjunction with AIGC.

Developing a comprehensive crisis communications plan is equally important. This plan should include strategies for addressing and mitigating the impact of AIGC-driven misinformation, should it occur. Don’t forget to evaluate existing communications technology: a multi-modal system that can send alerts via email, SMS, app-based push notifications, desktop alerts and voice makes it easier to keep people informed, combat misinformation and provide any necessary instructions.
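
As a way to picture the multi-modal requirement, the sketch below fans a single alert out across independent channels so that one failing channel cannot block the others. The channel functions are hypothetical stubs; in practice each would wrap a real provider such as an SMTP relay, SMS gateway or push notification service.

```python
# Minimal sketch of multi-modal alert fan-out. The send functions are
# hypothetical stubs standing in for real provider integrations.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    subject: str
    body: str

def send_email(alert: Alert) -> None:
    print(f"[email] {alert.subject}: {alert.body}")  # stub: wrap SMTP here

def send_sms(alert: Alert) -> None:
    print(f"[sms] {alert.body[:160]}")  # stub: SMS bodies are often truncated

CHANNELS: List[Callable[[Alert], None]] = [send_email, send_sms]

def broadcast(alert: Alert) -> None:
    """Send on every channel; a failure on one must not block the rest."""
    for send in CHANNELS:
        try:
            send(alert)
        except Exception as exc:
            print(f"Channel {send.__name__} failed: {exc}")  # log and continue

broadcast(Alert("Misinformation advisory", "Refer to official guidance only."))
```

The design point is redundancy: during an AIGC-driven incident, any single channel may be degraded or distrusted, so the same verified message should reach people several ways.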

Conducting tabletop exercises that simulate AIGC attacks can prepare companies to respond effectively. These exercises should involve key stakeholders across the organization, including communications, legal, IT and executive leadership. Engaging in threat modeling specific to AIGC can help companies anticipate potential attack vectors and develop tailored responses.

Monitoring and Detection

While reinforcing cyber defenses is always worthwhile, the AIGC threat demands a particular focus on enhanced monitoring and detection. Companies should invest in AI-driven tools to detect deepfakes, misinformation and other AI-generated content. These tools can help identify manipulated media and false information before it spreads widely. Establishing a robust system for real-time monitoring of social media, news outlets and other online platforms is crucial, allowing companies to quickly identify and respond to false narratives or misleading content that could harm their brand. It’s also important to consider the impact AIGC can have on risk intelligence collection: reduce noise and time to detection by leveraging AI-powered risk intelligence that monitors vetted sources.
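
One lightweight way to approach narrative monitoring is to score incoming posts against a library of known false narratives using sentence embeddings. The sketch below uses the open-source sentence-transformers library; the model name, example narratives and the 0.6 threshold are assumptions for illustration, and a production pipeline would route matches to human review rather than act on them automatically.

```python
# Minimal sketch: embedding-based matching of incoming posts against known
# false narratives. Model, narratives and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_false = [  # narratives already debunked by your analysts
    "Ballots in County X were shredded before counting.",
    "Voting machines in State Y flipped votes overnight.",
]
incoming = [  # posts pulled from social media monitoring feeds
    "BREAKING: workers caught shredding ballots in County X!",
    "Local bakery wins award for best sourdough in the state.",
]

false_emb = model.encode(known_false, convert_to_tensor=True)
post_emb = model.encode(incoming, convert_to_tensor=True)

# Cosine similarity matrix: rows are incoming posts, columns are narratives.
scores = util.cos_sim(post_emb, false_emb)
for i, post in enumerate(incoming):
    best = float(scores[i].max())
    if best > 0.6:  # illustrative threshold; tune on labeled data
        print(f"Possible narrative match ({best:.2f}): {post}")
```

Semantic matching catches paraphrases that keyword alerts miss, which matters because AIGC makes endless rewording of the same false claim cheap.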

The mission can feel daunting and the path forward unclear. If you’d like to continue this discussion, provide feedback or are looking for assistance, OnSolve by Crisis24 is here to help.

Matt Rasmussen

Matt Rasmussen is a 23-year U.S. Army Veteran who currently serves as an Assistant Professor and Course Director at the U.S. Army War College. Matt’s most recent operational assignments were first as an infantry battalion commander and then as a hand-selected combat advisor battalion commander. During his Army career, Matt has served at every operational echelon from platoon to division, and deployed to Iraq and Afghanistan four times.