Thousands of realistic but fake AI child sex images found online, report says

Fake AI child sex images moving from dark web to social media, researcher says

Child safety experts say they are increasingly powerless to stop thousands of “AI-generated child sex images” from being easily and rapidly created and then shared across dark web pedophile forums, The Washington Post reported.

This “explosion” of “disturbingly” realistic images could help normalize child sexual exploitation, lure more children into harm’s way, and make it harder for law enforcement to find actual children being harmed, experts told the Post.

Finding victims depicted in child sexual abuse materials is already a “needle in a haystack problem,” Rebecca Portnoff, director of data science at the nonprofit child-safety group Thorn, told the Post. Now, law enforcement investigations will be further delayed by the added work of determining whether materials are real or AI-generated.

Harmful AI materials can also re-victimize anyone whose images of past abuse are used to train AI models to generate fake images.

“Children’s images, including the content of known victims, are being repurposed for this really evil output,” Portnoff said.

Normally, content depicting known victims can be blocked by child safety tools that hash reported images and detect when they are reshared, allowing online platforms to stop the uploads. But that technology only works on previously reported images, not newly AI-generated ones. Both law enforcement and child-safety experts report that these AI images are increasingly being popularized on dark web pedophile forums, with many Internet users “wrongly” viewing the content as a legally gray alternative to trading illegal child sexual abuse materials (CSAM).
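To illustrate that limitation, here is a minimal, hypothetical sketch in Python of how hash-based upload matching works. It is a simplification: real child-safety systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding rather than exact cryptographic digests, and the hash list and function names below are placeholders, not any actual platform's API.

```python
import hashlib

# Placeholder for a database of fingerprints of previously reported images.
# In practice, platforms consume shared hash lists maintained by child-safety
# organizations; this empty set stands in for that database.
KNOWN_REPORTED_HASHES: set[str] = set()

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded image.

    Simplification: an exact SHA-256 digest. Production tools use perceptual
    hashing so that near-duplicates (recompressed or resized copies) still
    match; an exact digest only catches byte-identical files.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Block an upload only if it matches a previously reported image.

    This is the gap the article describes: a newly AI-generated image has
    never been reported, so its fingerprint matches nothing and the upload
    passes through.
    """
    return image_fingerprint(image_bytes) in KNOWN_REPORTED_HASHES
```

The design point is that matching is retrospective: the system can only recognize what has already been reported and hashed, which is why a flood of novel, machine-generated images defeats it.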

“Roughly 80 percent of respondents” to a poll posted in a dark web forum with 3,000 members said that “they had used or intended to use AI tools to create child sexual abuse images,” ActiveFence, which builds trust and safety tools for online platforms and streaming sites, reported in May.

While some users creating AI images, and even some legal analysts, believe this content may not be illegal because no real children are harmed, some United States Justice Department officials told the Post that AI images sexualizing minors still violate federal child-protection laws. There appears to be no precedent, however: officials could not cite a single prior case that resulted in federal charges, the Post reported.

As authorities become more aware of the growing problem, the public is being warned to change online behaviors to prevent victimization. Earlier this month, the FBI issued an alert, “warning the public of malicious actors creating synthetic content (commonly referred to as ‘deepfakes’) by manipulating benign photographs or videos to target victims,” including reports of “minor children and non-consenting adults, whose photos or videos were altered into explicit content.”

These images aren’t just spreading on the dark web, either, but on “social media, public forums, or pornographic websites,” the FBI warned. The agency attributed the surge in malicious deepfakes to recent technology advancements, noting that AI tools like Stable Diffusion, Midjourney, and DALL-E can generate realistic images from simple text prompts. These advancements are “continuously improving the quality, customizability, and accessibility of artificial intelligence (AI)-enabled content creation,” the FBI warned.

Read more at: arstechnica.com