The moderation start-up ActiveFence has reported that a growing number of Internet users are using artificial intelligence (AI) tools to create child pornography images. According to Avi Jager, an executive at the start-up quoted by Bloomberg, 68 batches of algorithm-generated images of this type have reportedly been published on forums dedicated to child pornography content.

Fear of a proliferation of such online content

That figure covers only the first four months of 2023. Over the whole of last year, 25 series of these fake photos were published on the same platforms. Such content is just as dangerous as authentic images, and it increasingly challenges specialists and authorities through its growing realism and accessibility.

In the United States, the National Center for Missing and Exploited Children said it had not detected any massive increase in child pornography images generated by artificial intelligence. The organization nevertheless fears a proliferation of this content online and said it is in discussions with US lawmakers and platforms to prevent AI capabilities from being misused for these purposes.

Many AI companies have implemented safeguards by blocking certain prompts. However, these protections are not foolproof, and some Internet users share tips for circumventing them. Some American players in the sector have also explained that they do not restrict user prompts because such artificial images are not banned in the United States, unlike in France.
