
What happened to the ‘AI election’?

Taking stock of how much of a role AI actually did (or didn’t) play in the US elections.

Emily Parsons


For much of 2024, experts sounded the alarm about the harm deepfakes and other AI threats pose to political contests, with many predicting that the US presidential race would be the first “AI election.”

But as the dust settles, it’s not entirely clear how much of a role the technology did end up playing. While there were certainly attempts to use deepfakes and AI-fueled misinformation to sway voter opinion, there’s not much clear-cut evidence it had an impact, though some experts caution we don’t yet know the full story.

There were, of course, a few widely reported instances of AI's role in the election: a faked robocall that mimicked President Biden's voice, and a rising tendency to dismiss real campaign photos as AI-generated. A report from OpenAI also documented more than 20 election influence operations the company shut down, while noting that they hadn't gained much traction.

Jennifer Stromer-Galley, a professor at Syracuse University’s School of Information Studies who studies digital communications and elections, said AI ended up being just one smaller piece in a wider flood of election-related misinformation.

“As it turns out, the concerns around AI and deepfakes in this election didn’t really come to pass,” Stromer-Galley told Tech Brew. “I think [AI] still is very much an issue. But there are really big issues in our electoral space right now, and AI is a red herring.”

A couple of initial reports sizing up the role of AI don't point to any major new incidents, but do say that the technology is "polluting our information environment and undermining trust in the wider democratic system," as Sam Stockwell, research associate at the Alan Turing Institute's Centre for Emerging Technology and Security, wrote in one dispatch. A report from the Institute for Strategic Dialogue echoed this concern.

“As the year comes to a close, we can see that we did encounter many challenges, but the impacts were not always clear-cut,” Stockwell wrote in the report. “AI was used in malicious ways in most major elections, but there is a lack of evidence that it measurably affected any election results.”

Too soon to tell

On the other hand, David Evan Harris, a Chancellor’s Public Scholar at UC Berkeley who studies AI misinformation, said we might never have a full account of the role of deepfakes in the election without government-mandated cooperation from platforms.

“We didn’t really have the picture that we have today of what happened in the 2016 election until around 2018; it took a couple years to really uncover what happened, and that involved the Mueller investigation,” Harris told Tech Brew.

While laws that require platform disclosures in the European Union could give us a clue to the overall picture, Harris said we might never have the same accounting of the election media-scape that we had of the 2016 presidential election.

And while some researchers are using the 2016 election as a blueprint for studying misinformation, Harris said, generative AI might produce more personalized misinformation, concentrated in targeted ads and on encrypted messaging platforms with particularly low visibility.

“We should actually assume that election interference in the age of generative AI probably looks different,” Harris said.

Scaling up in the future

Looking toward future elections, the lack of understanding of AI's disruptive potential will be a big problem, according to Eric Wengrowski, founder and CEO of AI watermarking startup Steg.ai.

“One of the biggest issues right now is that we don’t actually have a very good way of measuring the impact that deepfakes and adversarial use of generative AI is having right now,” Wengrowski said.

While bad actors may just be trying their hand at early experimental AI campaigns, these efforts will likely scale up in the future as AI becomes more widespread and easily accessible, according to Wengrowski.

“I think the scale is going to increase. So I think you’re going to start to see more of these threats, and it’s just going to be more and more difficult to discern what is legitimate content against misinformation,” he said.
