How we were deepfaked by election deepfakes
Around this time last year, you probably read dozens of dire warnings about generative artificial intelligence’s impact on 2024’s bumper crop of global elections.
Deepfakes would supercharge political disinformation, leaving muddled voters unable to tell fact from fiction in a sea of realistic, personalised lies, the story went. Leaders from Sadiq Khan to the Pope spoke out against them. A World Economic Forum survey of experts ranked AI disinformation as the second-most pressing risk of 2024.
Sure enough, dozens of examples were widely reported. Joe Biden’s “voice” on robocalls urged primary voters to stay home; AI-generated videos of non-existent members of Marine Le Pen’s family making racist jokes were viewed millions of times on TikTok; and a fake audio clip of Sir Keir Starmer swearing at a staffer went viral on X.
But many experts now believe there is little evidence that AI disinformation was as widespread or impactful as was feared.
The Alan Turing Institute identified just 27 viral pieces of AI-generated content during the summer’s UK, French and EU elections combined. Only around one in 20 British people recognised any of the most widely shared political deepfakes around the election, a separate study found.
In the US, the News Literacy Project catalogued almost 1,000 examples of misinformation about the presidential election. Just 6 per cent involved generative AI. According to TikTok, removals of AI-generated content did not increase as voting day neared.
Mentions of terms such as “deepfake” or “AI-generated” in X’s user-submitted fact-check system, Community Notes, were more correlated with the release of new image generation models than major elections, a Financial Times analysis found.
The trend held in non-western countries, too: a study found just 2 per cent of misinformation around Bangladesh’s January election was deepfakes. South Africa’s polarised election was “marked by an unexpected lack” of AI, researchers concluded.
Microsoft, Meta and OpenAI all reported uncovering covert foreign operations attempting to use AI to influence elections this year, but none succeeded in finding a wide audience.
Much of the election-related AI content that did catch on wasn’t intended to trick voters. Instead, the technology was often used for emotional arguments — creating images that felt supportive of a certain narrative, even if they were clearly unreal.
Kamala Harris addressing a rally decked out with Soviet flags, for instance, or an Italian child eating a cockroach-topped pizza (in reference to the EU’s supposed support for insect diets). Deceased politicians were “resurrected” to support campaigns in Indonesia and India.
Such “symbolic, expressive, or satirical messages” are in line with traditional persuasion and propaganda tactics, according to Daniel Schiff, an expert in AI policy and ethics at Purdue University. Around 40 per cent of political deepfakes that a Purdue team identified were at least partly intended as satire or entertainment.
What about the “liar’s dividend”? This is the idea that people will claim that legitimate content showing them in a bad light is AI-generated, potentially leaving voters feeling that nothing can be believed any more.
An Institute for Strategic Dialogue analysis did find widespread confusion over political content on social media, with users frequently misidentifying real images as AI-generated. But most people are able to apply healthy scepticism to such claims. The share of US voters who said it was difficult to work out what news about the candidates was true fell between the 2020 and 2024 elections, according to Pew Research.
“We’ve had Photoshop for ages, and we still largely trust photos,” says Felix Simon, a researcher at Oxford university’s Reuters Institute for the Study of Journalism who has written about deepfake fears being overblown.
Of course, we cannot let our guard down. AI technology and its social impacts are advancing rapidly. Deepfakes are already proving a dangerous tool in other scenarios, such as elaborate impersonation scams or pornographic harassment and extortion.
But when it comes to political disinformation, the real challenge has not changed: tackling the reasons why people are willing to believe and share falsehoods in the first place, from political polarisation to TikTok-fuelled media diets. While the threat of deepfakes may grab headlines, we should not let it become a distraction.