‘Society needs to be alert’: Most people are unaware AI is being used to create child abuse content

There have been a rising number of reports about AI-generated child sex abuse material, according to a UK charity.
By Oceane Duboust

Even though a majority of people are concerned about artificial intelligence (AI), roughly 70 per cent are unaware that it’s already being used to create child sexual abuse content, a new survey shows.

The Lucy Faithfull Foundation, a UK-based child protection charity, surveyed more than 2,500 people in the UK; around 88 per cent agreed that AI-generated sexual images of under-18s should be illegal.

However, 40 per cent of those surveyed either didn’t know whether such content was illegal or believed it was legal in the UK.

In the EU and the UK, it is illegal to create, view, or share sexual images of minors, including material which has been created using AI technology.

This is a way of sexualising children. Children deserve our respect. They have rights. We must not trample over their rights, and there are consequences for doing so
Donald Findlater
Stop It Now helpline director

The UK charity, which also runs Stop It Now, a confidential helpline for people worried about their own thoughts and behaviour towards children or about another adult’s behaviour, is raising the alarm.

Donald Findlater, the helpline’s director, emphasised that the rapid evolution of AI is already being exploited online by child sex offenders every day.

“Every day we are called by people who are being arrested by the police here in the UK for viewing sexual abuse of children, including an increasing number who are viewing AI-generated images,” he told Euronews Next, explaining that there has been a rise in the number of reports about AI-generated child sex abuse material in the last year.

“Society needs to be alert to the issue,” he said as there are “serious knowledge gaps amongst the public” regarding how AI is used to create harmful content.

“It's not okay to look at these images. This is a way of sexualising children. This is not an appropriate thing to do. Children deserve our respect. They have rights. We must not trample over their rights, and there are consequences for doing so,” he added, inviting anyone directly or indirectly affected to seek help.

More than 217,000 people contacted the Stop It Now helpline in 2023, with around 80 per cent of them worried about their own thoughts or behaviour.

Rising phenomenon of AI sexual content

The spread of AI-generated child sexual abuse material was the subject of an alarming report published last year by the Internet Watch Foundation (IWF).

Of the 11,000 AI-generated images on a dark web child sexual abuse forum examined by IWF analysts, more than 2,500 were assessed as criminal.

"We’re seeing AI CSAM [child sexual abuse material] images using the faces of known, real, victims. We’re seeing the ‘de-ageing’ of celebrities and AI CSAM using the likeness of celebrity children," said IWF CEO Susie Hargreaves.

“We’re seeing how technology is ‘nudifying’ children whose clothed images have been uploaded online for perfectly legitimate reasons. And we’re seeing how all this content is being commercialised.”

Findlater added that people “shouldn't fool themselves into thinking that there are no consequences to doing this”.

“Where technology goes, abuse follows. We are dealing with the impacts of AI-generated sexual images of children here and now, up and down the country,” Ian Critchley, National Police Chiefs' Council (NPCC) Lead for Child Protection and Abuse Investigation, said in a statement.

“Creating, viewing and sharing sexual images of children – including those made by AI – is never victimless, and it is against the law. And we will find you,” Critchley added.

UK police made 1,700 arrests in a year using online undercover officers, though not all of those arrests were linked to AI-created content.


AI-generated content also creates new difficulties for investigators when it comes to identifying victims from virtual creations, with a recent New York Times article reporting that law enforcement has been struggling as AI technology advances.

Platforms not doing enough

Experts would also like to see more regulation of AI companies and social media platforms on this issue.

Last year, researchers found more than 3,200 images of suspected child sexual abuse in a dataset created by LAION, a non-profit organisation, which was used to train the generative AI tool Stable Diffusion.

A few weeks before, Stable Diffusion was highlighted as a “tool favoured by producers of child sex abuse imagery” as it “cannot in practice prevent its models from generating images that would contravene its terms of use,” the Internet Watch Foundation said.

“I would love to see increasing regulation. I would like to see AI companies develop their technology so that these kinds of tools can't be misused, and that therefore child sexual abuse images can't be created and shared,” Findlater said.


X (formerly Twitter) was fined €386,000 (AU$610,500) by Australia in October 2023 for failing to fully explain how it tackled child sexual exploitation content.

More recently, Meta’s decision to roll out end-to-end encryption in December 2023 was also met with concerns that it would give a safe haven online to child abusers.

Acknowledging that the detection processes are insufficient, the EU decided to extend an interim measure to combat child sexual abuse content until 3 April 2026.

It allows internet providers to search their services for this type of content and report it to authorities.
