Does Facebook's new policy on 'deepfake' videos go far enough? | #TheCube

Copyright AP Photo/Martin Meissner
By The Cube

The social media giant announced it would remove videos which have been digitally altered to make people appear to say fictional things.


Facebook has announced it will remove "deepfakes" and certain other manipulated videos from its platform, as part of an effort to combat misinformation ahead of the US presidential election.

In a blog post, the social media giant said it would take down digitally-altered videos which "would likely mislead someone into thinking that a subject of the video said words that they did not actually say".

Any banned content would also need to be the "product of artificial intelligence or machine learning".

Social media platforms have been under increasing pressure to tackle the threat of misinformation ahead of the US presidential election in November.

But the policy has been criticised for not tackling videos which are edited using less advanced manipulation methods, otherwise known as 'shallow fakes'.

Sam Gregory, Program Director at Witness, told Euronews that Facebook's new approach is only a good solution for "part of the problem of manipulated media".

'Only videos generated by artificial intelligence to depict people saying fictional things'

Facebook was criticised in May 2019 for refusing to remove an edited video of the US House of Representatives Speaker Nancy Pelosi from its platform.

The video had been slowed down to make it appear as though Ms Pelosi was slurring her speech and stumbling over her words.

At the time, Facebook defended its decision, saying it had subjected the video to a fact-checking process and had "dramatically reduced its distribution" on the social network.

The social media giant confirmed to Euronews that as part of its new policy it will not remove the edited video of the US House Speaker and that only videos generated by artificial intelligence to depict people saying fictional things will be taken down.

A spokesman for Nancy Pelosi dismissed Facebook's announcement, tweeting that "the real problem is Facebook's refusal to stop the spread of disinformation".

Another Democrat, Senator Mark Warner, said the policy change does not go far enough.

Facebook also confirmed that the new policy would not extend to content that is "parody or satire", or videos which have been edited solely to omit or change the order of words.

But all videos would still be reviewed by independent third-party fact-checkers under Facebook policy. Content that is found to be factually incorrect would appear less prominently on the site’s news feed and would be labelled false.

"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," said Monika Bickert, Facebook's Vice President of Global Policy Management.

"By leaving them up and labelling them as false, we’re providing people with important information and context".

Sam Gregory believes that the new policy does not do enough to combat "lightly edited or mis-contextualised videos".

"We need to have much clearer policies about how companies like Facebook will reduce the spread of shallow fakes".


Gregory also argues that companies should be more transparent with users when they remove a manipulated video.

"Policies need to be really clear about the lines they are drawing between what they remove and what they allow on their platform".

Watch Matthew Holroyd's report in the Cube above.

