'Violating and dehumanising': How AI deepfakes are being used to target women

By Oceane Duboust, Thomas Duthois, Matthew Ashe, Estelle Nilsson-Julien

Experts describe the use of a person’s images to create deepfakes using artificial intelligence (AI) without their consent as "dehumanising".


Deepfake technology has soared in popularity over the last few years, and it has never been easier to create deepfakes, which doctor video or audio in a hyper-realistic way.

Just a few pictures of someone’s face are needed to create a deepfake.

Some use the technology for fun, as it underpins face-swap apps and many social media filters, but it also has malicious uses, from fake videos of politicians to scams.

Women are particularly targeted by deepfake pornography, which is often shared without their consent.

In 2019, Deeptrace, a company specialising in AI, estimated that pornography made up 96 per cent of deepfake videos, the vast majority created without the consent of the person featured.

"This is violating. This is dehumanising. And the reality is that we know, this could impact a person's employability. This could impact a person's interpersonal relationships and mental health," said Noelle Martin, an activist and a researcher at the University of Western Australia, who has dealt with image-based abuses for ten years.

In one of the most recent cases, girls as young as 11 were targeted, with the doctored pictures shared among their high school classmates via social media.

A phenomenon targeting women

The phenomenon of deepfakes was first observed in 2017 on forums such as Reddit, with female celebrities targeted due to the large number of their images available online.

"One of the most disturbing trends I see on forums of people making this content is that they think it's a joke or they don't think it's serious because the results aren't hyperrealistic, not understanding that for victims, this is still really, really painful and traumatic," said Henry Ajder, an expert on generative AI.

But now, as the technology becomes more widely available, all women face serious risks of social and psychological repercussions.

"It's horrifying and shocking to see yourself depicted in a way that you didn't consent to," said Martin.

Sometimes, deepfakes are made to discredit women's work.

Activist Kate Isaacs and journalist Rana Ayyub were both victims of smear campaigns using deepfakes due to their professional activities.

Moreover, it’s not easy to detect a deepfake, especially as the technology improves.

"Realistically, the individual and the naked eye of the individual is just not going to be a reliable marker for spotting fakes. You know, even now, but particularly moving into the future as the outputs get better in quality," said Ajder.

In a recent public service announcement, the US Federal Bureau of Investigation (FBI) said it continues "to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content".

There have even been cases in which deepfake porn was used to extort people.


"As of April 2023, the FBI has observed an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats," the announcement reads.

Crimes go unpunished

Hard to detect, deepfakes are also hard to prosecute as lawmakers attempt to play catch-up with the technology.

At the end of 2022, the UK’s Ministry of Justice said that sharing deepfakes without a person's consent could result in imprisonment. Taiwan has passed a similar law.

In Europe, the Digital Services Act (DSA) doesn’t address the issue of non-consensual deepfakes. However, the EU AI Act, whose draft has just been negotiated, should provide a more robust legal framework.

In the United States, several states including California, Texas, and Virginia have made non-consensual deepfakes a criminal offence.


"There has to be some sort of global response from government, law enforcement, from people on the ground, victim communities. So there is accountability for people where they can't just ruin someone's life and get away with it and face no repercussions," said Martin.

