A photo of Donald Trump under arrest, a video depicting a dark future if President Joe Biden is re-elected, an audio recording of an argument between the two men. These social media posts have one thing in common: they are completely fake.

All were created using artificial intelligence (AI), a burgeoning technology. Experts fear it could unleash a deluge of false information during the 2024 presidential race, arguably the first election in which the technology's use will be widespread.

A temptation for all sides

Democrats and Republicans alike will be tempted to use AI, which is cheap, accessible and subject to little legal regulation, to better woo voters or to churn out campaign leaflets in a snap of the fingers.

But experts fear the tool could also be used to wreak havoc in a divided country, where some voters still believe the 2020 election was stolen from former President Donald Trump, despite evidence to the contrary.

In March, fake AI-generated images showing him being arrested by police officers went viral, offering a glimpse of what the 2024 campaign might look like. Last month, in response to Joe Biden's candidacy announcement, the Republican Party released a video, also made with AI, predicting a nightmarish future if he were re-elected. The realistic but entirely fabricated images showed China invading Taiwan and financial markets collapsing.

'New tools to fuel hatred'

And earlier this year, an audio recording in which Donald Trump and Joe Biden trade copious insults made the rounds on TikTok. It too was fake, and, again, produced with AI.

For Joe Rospars, founder of the digital agency Blue State, ill-intentioned people now have "new tools to fuel hatred" and to "bamboozle the press and the public." Fighting them "will require vigilance from the media, tech companies and voters themselves," he said. Whatever the intentions of those who use it, AI's effectiveness is undeniable.

When AFP asked ChatGPT to create a political newsletter in favor of Donald Trump, feeding it false claims he has spread, the chatbot produced, in a few seconds, a polished text riddled with lies. And when it was asked to make the text "more aggressive," it regurgitated the same false claims in an even more alarmist tone.

Mistrust of the media doesn't help

"Right now, AI is lying a lot," says Dan Woods, a former official on Joe Biden's 2020 campaign. "If our foreign adversaries only have to convince an already-hallucinating robot to spread disinformation, we should prepare for a much more intense disinformation campaign than in 2016," he said.

At the same time, the technology can also help campaigns better understand voters, especially those who rarely or never vote, says Vance Reavie, head of Junction AI. Artificial intelligence makes it possible "to understand precisely what interests them and why, and from there to determine how to engage them and which policies will appeal to them," he says.

It could also save campaign teams time when writing speeches, tweets or voter questionnaires. But "a lot of the content generated will be fake," Vance Reavie cautions. The distrust many Americans feel toward the mainstream media does not help.

Even easier to lie

"The fear is that as it becomes easier to manipulate the media, it will become easier to deny reality," said Hany Farid, a professor at the University of California, Berkeley. "If, for example, a candidate says something inappropriate or illegal, they can simply claim the recording is fake. That is particularly dangerous."

Despite the fears, the technology is already at work. Betsy Hoover of Higher Ground Labs told AFP her company is developing a tool that uses AI to write fundraising emails and evaluate their effectiveness.

"Bad actors will use every tool at their disposal to achieve their goal, and AI is no exception," said the former official in Barack Obama's 2012 campaign. "But I don't think that fear should stop us from taking advantage of AI."
