Zuckerberg says Facebook looking at ways to police deepfake content

Facebook CEO Mark Zuckerberg on April 11, 2018. Copyright REUTERS/Aaron P. Bernstein
By Alice Tidey with Reuters

The social media platform was criticised last month for not removing an altered video of US House Speaker Nancy Pelosi, which had been slowed so she appeared to be slurring.


Facebook CEO Mark Zuckerberg said on Wednesday that the company is looking at ways to police deepfake videos shared on the social media platform.

Facebook was criticised last month for not taking down an altered video of US House Speaker Nancy Pelosi, which had been slowed down so that she appeared to be slurring and tripping on her words.

Unlike YouTube, which removed the video from its platform citing policy violations, Facebook chose to alert users trying to share the content that the video could be misleading.

"It took a while for our system to flag the video and for our fact checkers to rate it as false... and during that time it got more distribution than our policies should have allowed," Zuckerberg conceded during a conference in Aspen, Colorado.

He added that Facebook is consulting with experts to determine what its "deepfake policy should be."

"This is certainly a really important area as the AI (artificial intelligence) technology gets better, and one that I think it is likely sensible to have a different policy and to treat this differently than how we just treat normal false information on the internet," he said.

Deepfakes are videos or audio files that have been manipulated with AI — for example, by superimposing one person's face onto another's body — so that they appear genuine.

READ MORE: EXPLAINER | Deepfakes threat - who is behind the new AI phenomenon?

Robert Chesney from the University of Texas and Danielle Keats Citron from the University of Maryland warned in a paper last year that "the risks to our democracy and to national security are profound."

"Credible yet fraudulent audio and video will have a much-magnified impact, and today's social media-oriented information environment interacts with our cognitive biases in ways that exacerbate the effect still further," they wrote.

The US Director of National Intelligence, Daniel R. Coats, also warned against such altered content in his latest annual Worldwide Threat Assessment report.

"Adversaries and strategic competitors will attempt to use deepfakes or similar machine-learning technologies to create convincing — but false — image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners," he said.
