An image undoubtedly generated by AI, a Twitter account posing as Bloomberg thanks to Elon Musk's flimsy paid certification system, Wall Street briefly dipping before recovering: welcome to the nightmare of disinformation. A fake image showing an explosion at the Pentagon went viral on Twitter on Monday, causing markets to slump slightly for about ten minutes and reigniting the debate over the risks associated with artificial intelligence.

Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p

— Andy Campbell (@AndyBCampbell) May 22, 2023




The fake photograph, apparently made with a generative AI program (capable of producing text and images from a simple query in everyday language), forced the US Department of Defense to react. "We can confirm that this is false information and that the Pentagon was not attacked today," a spokesman said. Firefighters covering the area where the building is located (Arlington, Virginia, near Washington) also took to Twitter to say that no explosion or incident had occurred at the Pentagon or nearby.

Twitter's responsibility

Fortunately, the deception was easy to spot here: the building does not look like the Pentagon, and there are many visual glitches in the sidewalk, the security barriers and the windows. The image appears to have been first published by the account @BloombergFeed, certified with a blue tick thanks to the new paid system launched by Elon Musk. But that system does not verify whether an account is actually affiliated with the organization it claims to represent. Many users, including some with hundreds of thousands of followers, fell for it. The fake news was picked up by the Russian channel RT, which later retracted it.

The image appears to have caused markets to drop slightly for a few minutes, with the S&P 500 losing 0.29% compared with Friday's close before recovering. "There was a drop related to this false information when the machines detected it," said Pat O'Hare of Briefing.com, referring to automated trading software programmed to react to social media posts.

Concern for the US presidential election

The incident comes after several fake photographs produced with generative AI were widely shared to illustrate the technology's capabilities, such as images of the arrest of former US President Donald Trump or of the Pope in a puffer jacket.

Tools like DALL-E 2, Midjourney and Stable Diffusion allow hobbyists to create convincing fake images without needing to master editing software like Photoshop.

But while generative AI makes it easier to create fake content, its dissemination and virality, the most dangerous components of disinformation, remain the responsibility of the platforms, experts regularly point out.

"Users are using these tools to generate content more efficiently than before (...) but they still spread via social media," Sam Altman, the boss of OpenAI (DALL-E, ChatGPT), said at a congressional hearing in mid-May. A particularly sensitive subject in the run-up to the US presidential election in November 2024, which is likely to mobilize fact-checkers full-time.
