Face-swapping app takes off in China, making AI-powered deepfakes for everyone

Image: The face-swapping app Zao in Shandong province, China, on Aug. 31, 2019. Copyright Da qing Imaginechina via AP
By David Ingram with NBC News Tech and Science News



For 30 seconds, anyone in China can now take the place of Leonardo DiCaprio in some of his most iconic roles — and all it takes is a smartphone and a bit of personal data.

The Chinese app Zao has surged in popularity over the past few days to become the country's top smartphone app, and descriptions of what it does have gone viral on social media in the U.S.

The app's appeal is simple: Upload a photo, and the app swaps the user's face onto DiCaprio's in a 30-second mashup of clips from his films. It can do the same with a character from "Game of Thrones," or with a performer in a music video. The app is available only in China, though some people outside the country have been able to get around that restriction.

It's as easy as using a photo filter on Instagram or Snapchat, according to people who have used it, but it also demonstrates the remarkable power of advances in artificial intelligence to make fake videos.

Allan Xia, an artist in New Zealand who was able to download the app, said in a viral series of tweets that he was able to create a DiCaprio video using his own face in eight seconds. And he said he was quickly overwhelmed by the implications.

"Imagine logging into your Netflix or Disney account — watching the latest TV show/Marvel movie, where a stand-in actor's would be auto-replaced with your own," he tweeted.

"The newest member of the Avengers is ... YOU," he said. "I'm both excited and interested from a technologist/creator perspective and morbidly cynical from a moral one."

Phone apps that use machine learning to put an entertaining twist on selfies have proved irresistible to millions of people. Google makes an app that shows people which piece of artwork they resemble. In July, FaceApp, which allows people to see what they look like as another gender or as years older, saw a spike in popularity.

But with each app, security and privacy questions resurface, including: What is the company doing with the photos?

Zao's original user agreement said that people who upload their images had agreed to surrender the intellectual property rights to their face and allow their images to be used for marketing purposes, Reuters reported.

But after criticism from users, Zao backtracked, revising its user agreement to say it would not take ownership of the intellectual property rights to users' faces.

"We understand the concern about privacy," the company, a unit of the Beijing-based app maker Momo, said on the social network Weibo, according to Bloomberg News. "We've received the feedback, and will fix the issues that we didn't take into consideration, which will need a bit of time."

WeChat, China's ubiquitous messaging service and social media platform, banned links to Zao, citing security risks.

Baptiste Robert, a French security researcher who also goes by Elliot Alderson, said the company is still retaining information about its users. He said he deleted one of the videos he had made with Zao, but then found that the company still stored a copy of it.

"If you care about your privacy, you shouldn't use Zao," he said in an email. "You give a part of privacy against something cool. Once the cool effect is done, your privacy is gone forever."

Beyond privacy, Zao has started a debate about what will happen when more powerful face-swapping software is in the hands of billions of people.

Zao's offerings so far are limited: it lets people choose which celebrities they want to trade faces with, but users can't insert themselves into any video they like. That may change in the near future, and people are already testing the limits of deepfake software.

One video that took off this year on YouTube showed former "Saturday Night Live" star Bill Hader doing an impression of Arnold Schwarzenegger as Hader's face morphs into Schwarzenegger's — a sort of test run for how such videos will be received by the public at large.


"It can't really be contained, and there's also high demand for it," said Clint Watts, a fellow at the Foreign Policy Research Institute, who testified before Congress this year about deepfake videos.

Watts, a former special agent on the FBI's Joint Terrorism Task Force, said he could imagine countless uses of the technology to mislead people, especially in countries that do not have a widely read independent press or other watchdogs. He suggested that someone might use the technology to show a fake beheading, the kind of incident that might cause a riot.

"There's no one to step in and say: 'This didn't really happen. This is a fake thing,'" he said.

Jack Clark, the policy director for OpenAI, an organization in San Francisco trying to build "safe and beneficial" artificial intelligence, said Zao's developers and everyone involved in such research should think ahead to potential downsides.

"Technology developers should be trying to anticipate the positive and negative uses of their technology and should be trying to communicate explicitly about them and what those are to more people," Clark said in an email.


"It's insufficient if this is just tech companies — it requires work from individual developers, to academia, to companies," he said.
