By Shane Raymond
“Do you want to see yourself acting in a movie or on TV?” said the description for one app on online stores, offering users the chance to create AI-generated synthetic media, also known as deepfakes.
“Do you want to see your best friend, colleague, or boss dancing?” it added. “Have you ever wondered how would you look if your face swapped with your friend’s or a celebrity’s?”
The same app was advertised differently on dozens of adult sites: “Make deepfake porn in a sec,” the ads said. “Deepfake anyone.”
How such increasingly sophisticated technology is applied is one of the dilemmas facing the makers of synthetic media software, which uses machine learning to digitally model faces from images and then swap them into films as seamlessly as possible.
The technology, barely four years old, may be at a pivotal point, according to Reuters interviews with companies, researchers, policymakers and campaigners.
It’s now advanced enough that general viewers would struggle to distinguish many fake videos from reality, the experts said, and has proliferated to the extent that it’s available to almost anyone with a smartphone, with no specialist skills needed.
“Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated non-consensual deepfake pornographic video – that’s the inflection point,” said Adam Dodge, an attorney and the founder of online safety company EndTab.
“That’s where we start to get into trouble.”
With the tech genie out of the bottle, many online safety campaigners, researchers and software developers say the key is ensuring consent from those being simulated, though this is easier said than done. Some advocate taking a tougher approach when it comes to synthetic pornography, given the risk of abuse.
Non-consensual deepfake pornography accounted for 96% of a sample of more than 14,000 deepfake videos posted online, according to a 2019 report by Sensity, a company that detects and monitors synthetic media. It added that the number of deepfake videos online was roughly doubling every six months.
“The vast, vast majority of harm caused by deepfakes right now is a form of gendered digital violence,” said Henry Ajder, one of the study authors and the head of policy and partnerships at AI company Metaphysic, adding that his research indicated that millions of women had been targeted worldwide.
Consequently, there is a “big difference” between whether an app is explicitly marketed as a pornographic tool or not, he said.
ExoClick, the online advertising network that was used by the “Make deepfake porn in a sec” app, told Reuters it was not familiar with this kind of AI face-swapping software. It said it had suspended the app from taking out adverts and would not promote face-swap technology in an irresponsible way.
“This is a product type that is new to us,” said Bryan McDonald, ad compliance chief at ExoClick, which, like other large ad networks, offers clients a dashboard of sites they can customise themselves to decide where to place adverts.
“After a review of the marketing material, we ruled the wording used on the marketing material is not acceptable. We are sure the vast majority of users of such apps use them for entertainment with no bad intentions, but we further acknowledge it could also be used for malicious purposes.”
Six other big online ad networks approached by Reuters did not respond to requests for comment about whether they had encountered deepfake software or had a policy regarding it.
There is no mention of the app’s possible pornographic usage in its description on Apple’s App Store or Google’s Play Store, where it is available to anyone over 12.
Apple said it didn’t have any specific rules about deepfake apps but that its broader guidelines prohibited apps that include content that was defamatory, discriminatory or likely to humiliate, intimidate or harm anyone.
It added that developers were prohibited from marketing their products in a misleading way, within or outside the App Store, and that it was working with the app’s development company to ensure they were compliant with its guidelines.
Google did not respond to requests for comment. After being contacted by Reuters about the “Deepfake porn” ads on adult sites, Google temporarily took down the Play Store page for the app, which had been rated E for Everyone. The page was restored after about two weeks, with the app now rated T for Teen due to “Sexual content”.
While there are bad actors in the growing face-swapping software industry, there are a wide variety of apps available to consumers and many do take steps to try to prevent abuse, said Ajder, who champions the ethical use of synthetic media as part of the Synthetic Futures industry group.
Some apps only allow users to swap images into pre-selected scenes, for example, or require ID verification from the person being swapped in, or use AI to detect pornographic uploads, though these are not always effective, he added.
Reface is one of the world’s most popular face-swapping apps, having attracted more than 100 million downloads globally since 2019, with users encouraged to switch faces with celebrities, superheroes and meme characters to create fun video clips.
The U.S.-based company told Reuters it was using automatic and human moderation of content, including a pornography filter, plus had other controls to prevent misuse, including labelling and visual watermarks to flag videos as synthetic.
“From the beginning of the technology and establishment of Reface as a company, there has been a recognition that synthetic media technology could be abused or misused,” it said.
The widening consumer access to powerful computing via smartphones is being accompanied by advances in deepfake technology and the quality of synthetic media.
For example, EndTab founder Dodge and other experts interviewed by Reuters said that in the early days of these tools in 2017, they required large amounts of input data, often totalling thousands of images, to achieve the kind of quality that can be produced today from just a single image.
“With the quality of these images becoming so high, protests of ‘It’s not me!’ are not enough, and if it looks like you, then the impact is the same as if it is you,” said Sophie Mortimer, manager at the UK-based Revenge Porn Helpline.
Policymakers looking to regulate deepfake technology are making patchy progress, hampered by new technical and ethical snarls.
Laws specifically aimed at online abuse using deepfake technology have been passed in some jurisdictions, including China, South Korea, and California, where maliciously depicting someone in pornography without their consent, or distributing such material, can carry statutory damages of $150,000.
“Specific legislative intervention or criminalisation of deepfake pornography is still lacking,” researchers at the European Parliament said in a study presented to a panel of lawmakers in October that suggested legislation should cast a wider net of responsibility to include actors such as developers or distributors, as well as abusers.
“As it stands today, only the perpetrator is liable. However, many perpetrators go to great lengths to initiate such attacks at such an anonymous level that neither law enforcement nor platforms can identify them.”
Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and a former member of the EU parliament, said broad new digital laws, including the proposed AI Act and the GDPR in Europe, could regulate elements of deepfake technology, but that there were gaps.
“While it may sound like there are many legal options to pursue, in practice it is a challenge for a victim to be empowered to do so,” Schaake said.
“The draft AI Act under consideration foresees that manipulated content should be disclosed,” she added.
“But the question is whether being aware does enough to stop the harmful impact. If the virality of conspiracy theories is an indicator, information that is too absurd to be true can still have wide and harmful societal impact.”