Generative AI fueling spread of deepfake pornography across the internet

Technology advances have made it easier for bad actors to create deepfake content
By Luke Hurst

An analysis has revealed that the overwhelming majority of deepfake videos found online are pornography.


There has been a 550 per cent increase in the number of deepfake videos found online this year compared to 2019, an analysis has found, with the vast majority of them being deepfake pornography videos.

The rise of new technologies like generative artificial intelligence (AI) has led to a huge increase in the creation of deepfake content, which is digitally altered or generated content purporting to depict a real person or scenario, whether through video, image, or audio.

Victims of deepfake pornography have described how being portrayed in these fictitious scenarios can destroy lives, with one calling it “a lifelong sentence”.

Deepfake content is created via machine learning algorithms capable of producing hyper-realistic results. Bad actors can use it to target victims, blackmail people, or carry out criminal or political manipulation.

A comprehensive report into deepfakes in 2023 found that deepfake pornography makes up 98 per cent of all deepfake videos found online, while 99 per cent of the victims targeted by deepfake pornography are women.

Analysts for a website aiming to protect people from online identity fraud - called homesecurityheroes.com - studied 95,820 deepfake videos, 85 dedicated online channels, and more than 100 websites linked to the deepfake ecosystem to produce its 2023 State of Deepfakes report.

One of the main findings of the report is that it can now take less than 25 minutes, and cost nothing, to create a minute-long deepfake pornographic video of anyone, using just a single clear image of their face.

The majority of deepfake pornography videos analysed by the website were of South Korean women. The next most represented nationalities among victims were the United States, Japan, and the United Kingdom.

Epidemic of deepfake porn in South Korea

South Korean nationals are by far the most common victims of deepfake porn, something the report’s analysts attribute to the global popularity of K-pop. Three of the four members of Blackpink, seen as Korea’s biggest girl group, are among the 10 most targeted individuals.

“K-pop idols are known for their widespread visibility and fan following, both within South Korea and internationally,” explained Chris Nguyen, head of research analysis for the report. “Their public profiles and extensive fan base mean that the creation and distribution of deepfake pornography involving them are more likely to reach a larger audience.”

As well as their high visibility, there is also “exceptionally high” demand for content featuring K-pop idols, Nguyen said. “Some exploit this demand by creating explicit deepfake content, particularly on dedicated adult websites, aiming to attract attention and generate more traffic.”

He also pointed out that South Korea’s strict regulations on pornography were likely playing a role in driving the creation and distribution of such content.

“This might be related to the ‘Streisand effect’, an unintended outcome of trying to conceal, delete, or censor information. In such cases, the result is a backfire, as it actually amplifies awareness and interest in that information.”

Deepfakes spreading across the internet

When looking at the prevalence of deepfake pornography, the analysts found that seven of the top ten most visited pornographic websites hosted deepfake content.

Across the top ten dedicated deepfake pornography websites there have been more than 303 million video views, which the analysts say shows how widespread and popular such content is becoming.

They call for a wider conversation about attitudes towards such material, with a particular focus on the ethics of creating it.

The researchers attribute the sudden rise in deepfake content to two things. Firstly, the emergence of a type of machine learning framework used for generative AI, called a generative adversarial network (GAN), has provided the technical means for creating deepfake content.

Secondly, the easy access to tools built on top of GANs means almost anyone can quickly and cheaply create deepfake content.


“These platforms offer a range of features, from simple face swapping to more complex video manipulations,” the report explains. The authors add that the growth of online communities dedicated to deepfake creation has “fostered collaboration, knowledge sharing, and the development of open-source projects”.
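The adversarial setup behind a GAN can be sketched in a few dozen lines of code. The example below is a deliberately simplified illustration of the general principle rather than anything described in the report: it assumes PyTorch and uses toy one-dimensional data, pitting a small generator network against a discriminator so that the generator gradually learns to produce samples the discriminator cannot tell apart from real ones.

# Illustrative GAN sketch only (assumes PyTorch); toy data, not a deepfake tool.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a fake "sample"
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from a Gaussian
    noise = torch.randn(64, latent_dim)
    fake = G(noise)                          # generated ("fake") samples

    # Train the discriminator to separate real from fake
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

In a real system the one-dimensional samples would be replaced by images or video frames and the two small networks by much larger ones, but the training loop follows the same adversarial pattern.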
