Between Manipulation and Transparency: Is AI Changing the Course of U.S. Elections?


As the U.S. elections draw near, the race between candidates intensifies in a heated competition for the White House. In the era of technological advancement, winning votes requires more than a compelling political message—it demands a sophisticated strategy, often involving artificial intelligence (AI) as a pivotal tool in political advertising.

Despite the benefits of rapid technological progress, the role of AI in elections has raised concerns about potential manipulation and the spread of misinformation, posing a serious threat to the integrity of democratic processes in the U.S. As AI advances, public concerns about its impact on the electoral system continue to grow.

Are Concerns About AI’s Threat to Election Integrity Justified or Overstated?

Earlier this year, experts and watchdogs sounded alarms over AI’s potential to disrupt the 2024 U.S. election through misinformation campaigns involving deepfake images and targeted political ads. These fears are echoed by the public, with a recent Pew Research Center survey showing that 39% of Americans believe AI will likely be used for malicious purposes during the presidential campaign, compared to only 5% who think it will be used positively. Another 27% believe it will be applied equally for both good and bad purposes.

A majority of 57% of U.S. adults—including nearly identical shares of Republicans and Democrats—expressed serious concern that individuals or organizations might use AI to create and disseminate false or misleading information about candidates and campaigns.

Yet, as the countdown to Election Day continues, it seems that some initial fears of AI drastically altering or tainting the election outcomes may have been overstated. In a report issued by the U.S. Intelligence Community in September, officials noted that although foreign entities like Russia have employed generative AI to enhance voter influence tactics, these tools have not revolutionized such methods.

Similarly, tech insiders agree that 2024 has not seen AI reach the impactful level some expected. Betsy Hoover, co-founder of Higher Ground Labs, a political tech investment fund, noted, “Many campaigns and organizations use AI in some way, but it hasn’t reached the anticipated level of influence or the feared potential impact.”

According to TIME magazine, researchers caution that the true impact of generative AI on this election cycle is not fully understood, especially given its use on private messaging platforms. While AI’s effect on this election might seem minor, they warn it could intensify in future elections as the technology improves and becomes more accessible to the public and political operatives.

Sunny Gandhi, vice president of political affairs at Encode Justice, shared his outlook, stating, “In a year or two, AI models will undoubtedly advance. I’m particularly worried about how things will look in 2026 and certainly by 2028.”

Earlier this year, researchers at Purdue University established a project to create a database documenting incidents involving political deepfakes, recording over 500 cases so far. This database serves as a warning of AI’s growing potential to influence and perhaps manipulate electoral processes if left unchecked.
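
To make concrete what such a catalog might track, here is a minimal Python sketch of a single incident record. The field names and example values are illustrative assumptions, not the Purdue team’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeepfakeIncident:
    """One catalogued political-deepfake incident (illustrative fields only)."""
    first_seen: date            # when the media first circulated
    medium: str                 # "video", "audio", or "image"
    target: str                 # person or institution depicted
    apparent_intent: str        # e.g., "satire", "education", "deception"
    disclosed_as_fake: bool     # did the original post carry a disclaimer?
    sources: list[str] = field(default_factory=list)  # links to reporting

# A hypothetical record, loosely based on an incident described later in this article:
incident = DeepfakeIncident(
    first_seen=date(2024, 2, 1),
    medium="audio",
    target="Sadiq Khan",
    apparent_intent="deception",
    disclosed_as_fake=False,
)
print(incident)
```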

Surprising Findings on Deepfake Videos in Politics


Remarkably, most deepfake videos aren’t created to deceive but often serve as satire, educational content, or political commentary, says project researcher Christina Walker.

According to Walker, these videos’ meanings often shift as they circulate through different political communities. She notes, “Someone posts a deepfake with a disclaimer, ‘This is a deepfake I created to illustrate X, Y, and Z.’ But after twenty retweets, someone else shares it as if it’s real.”

Another researcher, Daniel Schiff, highlights that many deepfakes seem intended to reinforce beliefs already held by viewers inclined to accept their messages as truth.

In August, Meta reported that generative AI-driven tactics had yielded only incremental gains in productivity and content generation for influence campaigns. Its findings suggest that tech-industry strategies to curb the spread of AI-generated content are, for now, effective.

However, researchers remain uncertain about AI's impact on elections. Mia Hoffmann, a research fellow at Georgetown’s Center for Security and Emerging Technology, says it’s challenging to assess AI's effect on voters, partly because large tech companies have restricted researchers’ access to data about posts on their platforms.

For instance, X (formerly Twitter) ended free access to its API, and Meta recently shut down CrowdTangle on Facebook and Instagram, making it harder for researchers to track hate speech and misinformation on these platforms.

“We’re at the mercy of what these companies share with us,” Hoffmann explains.

She also expresses concern over the rise of AI-generated misinformation on closed messaging platforms like WhatsApp, which is especially popular among immigrant communities in the U.S. Hoffmann believes AI tools may influence voters in swing states, but their true impact won’t be clear until after the elections.

“As these groups grow in electoral importance, they’re increasingly targeted with influence campaigns meant to suppress their votes or shift opinions,” she adds. “Due to the encryption of these apps, misinformation becomes harder to detect and counter.”

Surge in Political Deepfakes

TIME magazine reports that AI’s impact on global politics has been evident, with candidates in South Asia, for example, using AI to inundate the public with fake articles, images, and videos.

In February, a deepfake audio clip surfaced in which London Mayor Sadiq Khan appeared to make inflammatory comments before a large pro-Palestine rally. Khan noted that the fake audio fueled violent clashes between opposing protesters.

In the U.S., New Hampshire residents received AI-generated voice messages in February purportedly from President Joe Biden, urging them not to vote. The FCC quickly banned such automated calls featuring AI-generated voices.

A Democratic political advisor was criminally charged for creating these voice messages, sending a stern warning to others who might consider similar tactics. 

New Hampshire's Attorney General Warns Against AI Election Interference


New Hampshire Attorney General John Formella announced the charges with a strong message: "I hope our enforcement actions send a strong deterrent signal to anyone considering election interference, whether through AI or other means."

However, these warnings haven’t deterred all political figures. Former President Donald Trump, for example, stirred controversy in August by sharing AI-generated images of Taylor Swift endorsing his candidacy, alongside altered images of Vice President Kamala Harris in communist attire.

In September, a widely circulated video from a Russian disinformation campaign accused Harris of a hit-and-run incident, amassing millions of views on social media.

A TIME report highlights Russia’s role in using AI maliciously, with state actors producing fake texts, images, audio, and videos targeting the U.S., often amplifying immigration fears. However, it's unclear how much impact these campaigns have had on voters. In September, the U.S. Department of Justice intervened to disrupt a Russian propaganda campaign, “Doppelganger,” which sought to undermine support for Ukraine and promote pro-Russian policies to American voters.

The U.S. Intelligence Community noted that foreign actors using AI face obstacles, including built-in restrictions in various AI tools, complicating their dissemination efforts.

Generative AI: Risks and Potential for Malicious Use


According to the U.S. Cybersecurity and Infrastructure Security Agency (CISA), generative AI refers to software using statistical models that generalize patterns in existing data to create new content, ranging from computer code and text to synthetic media like videos, images, and audio.
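
As a concrete illustration of that definition, the short sketch below produces new text from patterns learned from existing data. It uses the open-source Hugging Face transformers library with the small public GPT-2 model; this is a minimal demonstration of the technology class CISA describes, not a tool attributed to any campaign or threat actor.

```python
# Minimal text-generation demo (pip install transformers torch).
# GPT-2 is a small public model that, per CISA's definition above,
# generalizes patterns in existing data to create new content.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Election officials reminded voters today that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```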

CISA underscores that as these generative capabilities grow, election officials must recognize how AI could affect the security and integrity of election infrastructure. While AI enhances productivity and bolsters election security and management, it also carries risks. Malicious actors, including foreign state operatives and cybercriminals, can exploit these AI tools for harmful purposes.

For the 2024 election cycle, generative AI capabilities are not expected to introduce entirely new threats but may amplify existing risks to election infrastructure, according to CISA.

CISA’s Outline of Potential Malicious Uses of Generative AI


The Cybersecurity and Infrastructure Security Agency (CISA) has provided examples of how malicious actors could exploit generative AI capabilities:

1. AI-Generated Video

  • Text-to-Video: Foreign actors can use text-to-video tools to create fabricated news videos with real news anchors, spreading disinformation as part of foreign influence campaigns.
  • Deepfake Videos: Cybercriminals use deepfake videos of public figures to mislead audiences and facilitate scams.


2. AI-Generated Images 

  • Text-to-Image: Foreign actors use text-to-image generators to create misleading visuals that distort public perception during crises.
  • AI-Altered Images: Malicious actors create synthetic profile photos for fake accounts involved in influence operations and alter original images or videos to support these narratives.


3. AI-Generated Audio

  • Text-to-Speech: Cybercriminals use AI-generated voice impersonations to access sensitive information or manipulate organizations into taking specific actions.
  • Voice Cloning: Criminals employ AI voice cloning tools to impersonate unsuspecting victims, enabling fraud and disinformation campaigns.


4. AI-Generated Text

  • Large Language Models (Text-to-Text): Foreign actors leverage AI-generated English content to fuel covert influence operations with grammatically accurate material at minimal cost.
  • Chatbots and Phishing: Cybercriminals also deploy AI-powered chatbots in advanced phishing and social engineering attacks.


According to the U.K. National Cyber Security Centre (NCSC), phishing occurs when cyber attackers send fraudulent emails containing links to malicious websites, which may deliver ransomware or other malware that can compromise systems.
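
To make the mechanics concrete, below is a deliberately naive sketch of the kind of lookalike-domain check a mail filter might apply to links in an incoming message. The domain names are hypothetical, and real filters rely on machine learning, reputation feeds, and sandboxing rather than simple string rules like these.

```python
from urllib.parse import urlparse

# Illustrative heuristics only; production phishing filters are far more
# sophisticated than string matching.
TRUSTED_DOMAINS = {"example-county-elections.gov"}  # hypothetical allowlist

def looks_suspicious(url: str) -> bool:
    """Flag links whose host merely resembles a trusted election-office domain."""
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED_DOMAINS:
        return False
    # Lookalike check: a trusted name embedded in an untrusted host,
    # e.g. "example-county-elections.gov.attacker.net".
    return any(trusted in host for trusted in TRUSTED_DOMAINS)

print(looks_suspicious("https://example-county-elections.gov/register"))       # False
print(looks_suspicious("https://example-county-elections.gov.evil.net/login")) # True
print(looks_suspicious("https://unrelated.org"))                                # False
```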

Kaspersky defines social engineering as a manipulation tactic that leverages human error to obtain private information or valuable assets. Social engineering attacks exploit users’ behaviors, manipulating them into taking actions that serve the attacker’s interests. This often involves deceiving users unaware of the full value of their personal data.

Potential Election-Related Malicious AI Uses


CISA notes that malicious actors could leverage generative AI tools to reduce costs and increase the volume of cyber incidents and foreign influence operations. They could create new malware strains capable of evading cybersecurity defenses or boost the effectiveness of distributed denial-of-service (DDoS) attacks, which can overwhelm websites, including election-related ones, by flooding them with massive volumes of traffic.
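
As a defensive illustration of that last point, per-client rate limiting is one basic building block for blunting request floods. The sketch below implements a fixed-window counter in plain Python; the thresholds are arbitrary, and real election websites would lean on CDNs and upstream traffic scrubbing rather than application code alone.

```python
import time
from collections import defaultdict

# Fixed-window rate limiter: allow at most MAX_REQUESTS per client IP
# in each WINDOW_SECONDS window. Thresholds here are arbitrary examples.
MAX_REQUESTS = 100
WINDOW_SECONDS = 60

_counters: dict[str, tuple[int, int]] = defaultdict(lambda: (0, 0))  # ip -> (window, count)

def allow_request(client_ip: str) -> bool:
    window = int(time.time() // WINDOW_SECONDS)
    last_window, count = _counters[client_ip]
    if window != last_window:
        count = 0  # a new window has started; reset the counter
    if count >= MAX_REQUESTS:
        return False
    _counters[client_ip] = (window, count + 1)
    return True

# Example: the 101st request from one IP in the same minute is rejected.
for _ in range(101):
    allowed = allow_request("203.0.113.7")
print(allowed)  # False
```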

Possible Malicious Uses of AI Targeting Elections


Here are some potential examples of malicious uses of AI targeting elections:

1. Election Processes

  • AI-generated chatbots, voice, or videos could spread disinformation about voting times, locations, and methods through text messages, emails, social media, or print.
  • AI-driven content and tools can expand the reach and effectiveness of foreign influence operations and misinformation campaigns targeting election processes.
  • AI can also create convincing fake voter records to deceive voters.


2. Election Offices

  • Voice cloning tools could impersonate election office staff to gain access to sensitive election management or security information.
  • AI tools can execute high-quality phishing attacks against election officials or staff, aiming to steal sensitive information.
  • AI programming tools might even generate sophisticated malware that can evade detection systems.
  • AI-generated text and voice clones could produce fake voter calls, flooding contact centers and confusing officials.


3. Election Officials

  • AI-generated content, like deepfake videos, can be used to harass, impersonate, or delegitimize election officials.
  • AI can create fake audio or video impersonations of officials that spread false information to the public about election security or integrity.
  • By aggregating publicly available information, AI can assist in doxing attacks against election officials. Doxing involves exposing someone’s personal information online—such as real names, addresses, workplaces, phone numbers, and financial data—without their consent.


4. Election Vendors

  • Advanced AI phishing and social engineering tactics could be used to manipulate election technology vendors.
  • AI-generated fake videos of election vendors could be used to spread false statements that cast doubt on the security of election technology.


CISA highlights that despite the evolving landscape of AI threats, election officials are well-prepared to counter these risks effectively. Election security personnel are experienced in handling challenges like cyber fraud and foreign influence operations, both of which generative AI may amplify.
 

Amidst these challenges, will the winning candidate in the upcoming U.S. election make it to the White House unaffected by the machinery of AI? And will effective measures be in place to ensure a democracy fortified against AI-driven threats?
