Understanding the Threats to U.S. Election Security in 2024
from Diamonstein-Spielvogel Project on the Future of Democracy
Responders pull out the burning contents of a ballot box after it was set on fire in Vancouver, Washington, in a still image from video. Evan Bell/ABC Affiliate KATU/Reuters

Despite widespread concern that foreign interference and generative AI would pose major threats to the 2024 election, the greatest risks emanate from rising domestic extremism and diminishing domestic trust.

October 31, 2024 3:08 pm (EST)

Expert Brief
CFR scholars provide expert analysis and commentary on international issues.

Kat Duffy is a senior fellow for digital and cyberspace policy at the Council on Foreign Relations. Jacob Ware is a research fellow at the Council on Foreign Relations. 

With less than a week before the 2024 presidential election, physical and digital threats to U.S. election infrastructure remain a risk. Already, actors hostile to the functioning of liberal democracy have worked to disrupt the free exercise of the United States’ upcoming election, perhaps never more evidently than in an assassination attempt that narrowly missed killing former President and Republican candidate Donald Trump. However, it is vital that such threats be properly contextualized to avoid either exaggerating or downplaying the challenges U.S. democracy faces. 

Are there security concerns heading into Election Day on November 5? 

More on:

Election 2024

United States

Homeland Security

Democracy

Terrorism and Counterterrorism

The United States is in a heightened threat environment heading into Election Day, with multiple extremist factions threatening to disrupt the electoral process. Two prominent assassination attempts on former President Trump have occurred against a backdrop of myriad disrupted plots and a record high number of threats to public officials, as violent political rhetoric raises the stakes. Both foreign adversaries and Salafi-jihadist extremists have sought to take advantage of this fractious moment by inspiring or launching acts of violence in the United States.  

The days (or weeks) following the election could prove the most consequential, particularly if a clear winner has not emerged for the presidency. Such uncertainty gives conspiracy theories greater space to develop and circulate and can significantly increase political unrest or even violence within local communities. In 2020, for instance, vote-tallying centers in swing counties and cities—including Maricopa County in Arizona, Philadelphia, and Detroit—were targeted by extremist protests or terrorist plots. 

This year, violent far-right extremists likely pose the greatest threat, given the January 6, 2021, precedent of violence within a political transition, as well as violent rhetoric repeatedly issued by the Republican Party’s candidate. The Department of Homeland Security has even warned that the “heightened risk” of violence might include extremists attempting to sabotage ballots—a step that, if successful, could launch the country into a constitutional crisis. Recent arson attacks on ballot boxes in Vancouver, Washington, and Portland, Oregon, remain unsolved. Meanwhile, anti-government militia groups remain active on Facebook—the social media platform appears to have allowed its artificial intelligence (AI) systems to auto-generate pages for the groups—and continue to coordinate over the platform to conduct vigilante monitoring of ballot boxes to prevent “ballot stuffing,” a move better suited to intimidating voters than illuminating electoral irregularities.

The unrest could continue up to Inauguration Day on January 20, 2025, and even beyond.

How serious has outside interference been in this year’s election cycle? 

Despite reports of attempted interference [PDF] by China, Iran, and Russia, evidence indicates that the electoral process itself remains safe and secure from foreign interference as the election nears. Intelligence officials continue to provide regular public briefings to clarify threats, maintaining a trend of rapid declassification in the interest of filling information vacuums susceptible to conspiracy theories with credible, vetted information. 

Although concern about a possible cyberattack on or near Election Day continues to loom large for state and local officials, domestic extremists seeking to undermine the election by intimidating electoral workers, engaging in political violence, or disrupting the voting process likely pose a more significant immediate threat than foreign interference. The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency has repeatedly emphasized the cybersecurity benefits of the United States’ decentralized elections infrastructure; the lack of any single point of attack makes a comprehensive cyberattack on U.S. elections much harder to achieve.  

2024 has been the year of elections. What lessons can the United States take from other countries about how threats to the information environment have evolved? 

As predicted, the super-cycle of global elections has provided valuable insights into how the U.S. electoral environment would evolve in 2024. Disinformation campaigns in Bangladesh, Serbia, South Africa, and Zimbabwe in 2023 and 2024 sought to undermine electoral trust in those countries by attacking institutions, intimidating officials, and fueling narratives of foreign interference or fraud. These campaigns particularly targeted women in prominent political roles.  

The rise of generative AI has added a new tool to these operations, which often involve bots, fake news sites, and collaboration with state-controlled media to distort the information landscape. Despite that, the impact on electoral outcomes appears to have remained minimal.  

The United States has experienced similar tactics, both through (seemingly) domestic campaigns of pro-Trump bots operating on X, the website formerly known as Twitter, and through foreign influence operations, such as the creation of fake news sites by Russia. Meanwhile, the U.S. public’s faith in the integrity of the 2024 election not only continues to split across party lines, but also demonstrates a concerningly low level of trust overall in the electoral process. In one poll, only 1 in 5 Trump supporters say that they will strongly trust the electoral results if he loses, whereas 3 in 5 supporters of Democratic candidate and Vice President Kamala Harris say that they will trust the results regardless of outcome.  

What did authorities learn four years ago, and what still needs to be done? 

The main lesson drawn by the January 6 Committee report [PDF] was that U.S. law enforcement agencies, chiefly the U.S. Capitol Police, underestimated the extent to which Trump might agitate his supporters and mobilize them to march on the U.S. Capitol. Given that Trump has repeatedly denigrated the integrity of both U.S. elections and law enforcement agencies, security services need to prepare for the Republican candidate to incite unrest once again should he fail to win the election. A recent Pew survey found that only 24 percent of U.S. citizens believe Trump would publicly acknowledge an electoral defeat—he has still not acknowledged his 2020 defeat. 

After the catastrophic security failures displayed on January 6, it is unlikely that law enforcement and intelligence agencies will once again be caught unprepared. The Department of Homeland Security recently designated January 6 as a National Special Security Event, establishing the U.S. Secret Service as lead agency and providing a range of new resources. Although this step will ensure security services put in place a more robust deterrent posture on that day, it will not deter violence on other important ceremonial dates connected to the electoral cycle, nor will it support local partners facing low-level unrest. Moreover, the Secret Service’s continued organizational challenges and protection failure earlier this year may have diminished its credibility to lead a strong response.  

Political leaders, in tandem with their counterparts in civil society, need to take a responsible approach to acknowledging the trust deficits and credibility gaps that continue to define this election and voters’ view of the integrity of the electoral process. Educating the American people about the integrity and resilience of electoral infrastructure to “pre-bunk” conspiracy theories about the vote and pushing back against any narratives that would support political violence or the intimidation of electoral workers, candidates, and voters, will be mission critical for both parties in the days directly leading up to and after the election. Safe and secure elections need to be defended from foreign interference this year, but not nearly as much as they will need to be protected from domestic actions.   

This work represents the views and opinions solely of the authors. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy. 

This publication is part of the Diamonstein-Spielvogel Project on the Future of Democracy.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.