The volume of AI-generated misinformation, particularly election-related deepfake images, has increased by an average of 130% per month on X over the past year.

The figures come from a study published by the Center for Countering Digital Hate (CCDH), a British non-profit organization dedicated to fighting online hate speech.

To measure the growth of the phenomenon - the most recent fake photos concern the American election campaign and portray Trump surrounded by African-American supporters, generated by his own supporters - the study examined the four most popular image generators: Midjourney, OpenAI's DALL-E 3, Stability AI's DreamStudio and Microsoft's Image Creator.

In particular, the 130% figure refers to the average monthly increase in election-related deepfakes circulating on X.

All of the companies examined have, among other things, adopted written policies against the creation of misleading content and have joined an agreement among major tech companies to prevent misleading AI content from interfering with the 2024 elections, and not only in the USA.

The researchers said the AI tools generated the requested misleading images in 41% of their tests, and were more susceptible to prompts asking for photos depicting election fraud, such as ballots thrown in the trash, than to requests for images of Biden or Trump.

According to the analysis, ChatGPT Plus and Image Creator managed to block all requests for images of the candidates;

Midjourney performed the worst among the tools, generating misleading images in 65% of tests.

“The possibility of AI-generated images serving as photographic evidence could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections,” the researchers say.