AI-generated child sexual abuse images could flood the Internet without action, watchdog warns

Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse becomes overwhelming.
By Euronews with AP


The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.

In a written report, the UK-based Internet Watch Foundation (IWF) urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.

"We're not talking about the harm it might do," said Dan Sexton, the watchdog group's chief technology officer.

"This is happening right now and it needs to be addressed right now".

The IWF investigated more than 11,000 images shared on dark web forums over a one-month period. Of those, more than 2,500 depicted child sexual abuse, and one in five was classified as Category A, meaning it depicted the most severe forms of abuse.

In a first-of-its-kind case in South Korea, a man was sentenced in September to two and a half years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.

In some cases, kids are using these tools on each other. 

At a school in southwestern Spain, police have been investigating teens’ alleged use of a phone app to make their fully dressed schoolmates appear nude in photos.

An investigation into dark web forums

The report exposes a dark side of the race to build generative AI systems that enable users to describe in words what they want to produce — from emails to novel artwork or videos — and have the system spit it out.

If it isn’t stopped, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.

Sexton said IWF analysts discovered faces of famous children online as well as a “massive demand for the creation of more images of children who’ve already been abused, possibly years ago.”

“They’re taking existing real content and using that to create new content of these victims,” he said. “That is just incredibly shocking.”

Sexton said his charity organization, which is focused on combating online child sexual abuse, first began fielding reports about abusive AI-generated imagery earlier this year.

That led to an investigation into forums on the so-called dark web, a part of the internet hosted within an encrypted network and accessible only through tools that provide anonymity.

What IWF analysts found were abusers sharing tips and marvelling about how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages. 

Some are also trading and attempting to profit off such images that appear increasingly lifelike.

"What we’re starting to see is this explosion of content," Sexton said.

While the IWF's report is meant to flag a growing problem more than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse. 


It particularly targets the European Union, where there's a debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse even if the images are not previously known to law enforcement.

A big focus of the group’s work is to prevent previous sex abuse victims from being abused again through the redistribution of their photos.

A plea for more regulation

The report says technology providers could do more to make it harder for the products they've built to be used in this way, though the task is complicated by the fact that some of the tools, once released, are hard to put back in the bottle.

A crop of new AI image-generators was introduced last year and wowed the public with their ability to conjure up whimsical or photorealistic images on command. But most of them are not favoured by producers of child sex abuse material because they contain mechanisms to block it.

Technology providers that have closed AI models, with full control over how they're trained and used — for instance, OpenAI's image-generator DALL-E — appear to have been more successful at blocking misuse, Sexton said.


By contrast, a tool favoured by producers of child sex abuse imagery is the open-source Stable Diffusion, developed by London-based startup Stability AI. 

When Stable Diffusion burst onto the scene in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and pornography. While most of that material depicted adults, it was often nonconsensual, such as when it was used to create celebrity-inspired nude pictures.

We need to have discussions about the darker side of this amazing technology.
Susie Hargreaves
CEO, Internet Watch Foundation

Stability later rolled out new filters that block unsafe and inappropriate content, and a license to use Stability's software also comes with a ban on illegal uses.

In a statement released Tuesday, the company said it "strictly prohibits any misuse for illegal or immoral purposes" across its platforms. 

"We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes," the statement reads.


Users can still access unfiltered older versions of Stable Diffusion, however, which are “overwhelmingly the software of choice... for people creating explicit content involving children,” said David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.

"You can’t regulate what people are doing on their computers, in their bedrooms. It’s not possible," Sexton added. "So how do you get to the point where they can’t use openly available software to create harmful content like this?"

Most AI-generated child sexual abuse images would be considered illegal under existing laws in the US, UK, and elsewhere, but it remains to be seen whether law enforcement has the tools to combat them.

The IWF's report is timed ahead of a global AI safety gathering next week, hosted by the British government, that will draw high-profile attendees including US Vice President Kamala Harris and tech leaders.

"While this report paints a bleak picture, I am optimistic," IWF CEO Susie Hargreaves said in a prepared written statement. 


She said it is important to communicate the realities of the problem to “a wide audience because we need to have discussions about the darker side of this amazing technology".
