AI-generated content is more explicit, extreme and complex than the child sexual abuse material seen in the past, says the Internet Watch Foundation.
Artificial intelligence-generated imagery depicting the sexual abuse of children surged by 14 percent in 2025, as investigators face growing difficulty distinguishing synthetic content from real photographs, according to a new report.
The Internet Watch Foundation (IWF), a British non-profit dedicated to removing child sexual abuse material (CSAM) from the internet, identified more than 8,000 AI-generated images and videos from user reports over the past year.
The IWF’s researchers label content as AI-generated if the imagery contains obvious telltale errors, if the victim reports that it is AI-generated, or if the original creator has disclosed how AI was used.
While AI-generated sexual content is still a relatively small share of what the organisation sees online, the IWF warns that it is increasing at an alarming pace.
More than 3,400 of the AI-generated items were hyper-realistic “full-motion” videos in which multiple people can be shown interacting.
Over 65 percent of these videos depicted the most severe forms of abuse, including rape, sexual torture, and bestiality, which are classified as the highest category of child sexual content under British law.
By comparison, only 43 percent of non-AI-generated sexual content fell into the most extreme categories. The IWF said this suggests perpetrators are using AI tools to make more explicit, extreme and complex content than before.
“We now face a technological landscape that can generate infinite violations with unprecedented ease,” Kerry Smith, the IWF’s CEO, said in the report.
How are perpetrators using AI?
The study also sheds light on how offenders are actively developing and sharing tools.
Researchers observed discussions on the dark web where perpetrators trade and work together to develop custom AI models and databases that generate abusive material.
In one example, researchers identified an advertisement offering “custom courses” that promised to teach users how to create AI-generated images of teenagers.
“Single applications can now generate abusive imagery with minimal effort, removing the need for technical expertise and significantly lowering barriers to entry,” the report found. In many cases, models require only a single reference image to produce child sexual content.
While AI is making it easier for anyone to create simple CSAM, the report said a few well-known creators with more advanced skills produce longer, more sophisticated material. One creator, for example, was thanked over 3,000 times for a 30-minute AI-generated sexual abuse video.
The IWF noted that the material it has been able to gather gives only “a partial view” of the child sexual abuse material circulating online, and that the true volume is likely “significantly greater” than what it has found.
That is because IWF analysts are limited in where they can look, the report said, as they cannot go beyond paywalls or into encrypted spaces.
EU-wide ban could be considered a ‘minimum standard’
Smith urged the European Union to consider implementing a bloc-wide ban on both AI-generated child sexual abuse content and the tools used to create it.
Any such ban would also make it illegal for users to create their own personalised content, even if it is never shared, the report said.
“Advances in technology should never come at the expense of a child’s safety and well-being,” Smith said, adding that such a ban should serve as a “minimum standard with no exceptions”.
The report comes shortly after European legislators approved a temporary extension to the ePrivacy Directive, a piece of legislation that allows internet platforms to detect CSAM.
The extension, set to expire on April 3, is intended to give lawmakers time to agree on a long-term legal framework to combat child sexual abuse online.
In a press release, legislators said that any future measures must remain “proportional” and should apply only to content already flagged as potential child sexual material, instead of enabling surveillance of all encrypted conversations.
The IWF said it also wants the EU AI Act to be amended to label AI systems that can be used to generate child sexual content as “high risk.” Under the Act, a “high risk” designation would mean systems have to undergo more rigorous testing before being made available in the EU.
The report added that this designation would reduce the amount of CSAM in circulation, because the tools would be tested more thoroughly before release.
Using AI for the sexual exploitation of children or to create child sexual abuse material is already illegal under the EU AI Act, and the legislation bars any system explicitly designed for such purposes from being made available in the bloc.