
Thousands of AI videos featuring sexualised minors shared on TikTok, report finds

The TikTok logo is displayed on a mobile phone in front of a computer screen showing the TikTok home screen, Oct. 14, 2022, in Boston. Copyright AP Photo/Michael Dwyer, File
By Anna Desmarais
Published on

TikTok videos show young girls in sexualised clothing or suggestive positions, with links in the comments to Telegram groups selling child pornography.

Artificial intelligence (AI) generated videos showing young girls in revealing clothes or positions have gained millions of likes and shares on TikTok, despite such content being prohibited on the platform, according to a report.

The Spanish fact-checking organisation Maldita found more than 20 accounts on the platform that had published over 5,200 videos featuring young girls wearing bikinis, school uniforms, and tight clothing. Together, these accounts have amassed more than 550,000 followers and almost 6 million likes.

The comments contain links to external platforms, including Telegram communities that sell child pornography, the analysis shows. Maldita said it reported the 12 Telegram groups identified in its study to the Spanish police.

The accounts also generate a profit by selling AI-generated videos and images through TikTok’s subscription service, which pays creators a monthly fee for access to their content. The platform gets roughly 50 per cent of the profits through this model, according to its agreement with creators.

The report comes as countries around the world, such as Australia, Denmark, and the European Union, have either enforced social media restrictions for users under 16 or are discussing them as a way of keeping young people safe online.

TikTok requires content creators to label videos made with AI. Such content may also be removed from the platform if it is considered “harmful to individuals,” according to its community guidelines.

However, the Maldita report found that most of the videos it analysed carried no watermark or other indication that AI had been used to create them.

Some videos carried a “TikTok AI Alive” watermark, which is applied automatically when the platform’s tool turns still images into videos.

In statements to Euronews Next, Telegram and TikTok said they are "fully committed" to preventing child sexual abuse material on their platforms.

Telegram said it scans all media uploaded to its public platform and compares it against child sexual abuse material already removed from the platform to prevent it from spreading.

"The fact that criminals must use private groups and another platform's algorithms to grow is proof of the effectiveness of Telegram's own moderation," the statement reads.

Telegram said it removed more than 909,000 groups and channels containing child sexual abuse material in 2025.

As for TikTok, it said 99 per cent of content harmful to minors is removed automatically, and 97 per cent of offending AI-generated content is also removed proactively.

The platform says it acts immediately to remove or close accounts that share sexually explicit content involving children and reports them to the United States' National Center for Missing and Exploited Children (NCMEC).

TikTok also told CNN that it removed more than 189 million videos and banned more than 108 million accounts between April and June 2025.

This story was updated with comments from Telegram and TikTok.
