Twitter wants your help to combat deepfakes | #TheCubeComments
Twitter is the latest social platform to announce new policies to address and combat the rise of deepfake videos.
Deepfakes, or "synthetic and manipulated media", are videos that have been realistically altered using artificial intelligence, and have gained significant attention for their role in spreading false information.
But in an unusual move, Twitter is seeking feedback from its users and other experts before enacting any new strategies.
"We need to consider how synthetic media is shared on Twitter in potentially damaging contexts," the company said in a thread.
"We want to listen and consider your perspectives in our policy development process. We want to be transparent about our approach and values."
Vijaya Gadde, Twitter’s Chief Legal Officer, said: "We think that a lot of people have a lot of interest in this space, and have a lot of thoughts on how we should be dealing with this content."
But Twitter has not confirmed when it will implement any resulting policy changes.
Sam Gregory, programme director at Witness, praised the initiative to consult the public on monitoring deepfakes but stressed that more action is needed.
“Deepfakes are mobilised to get journalists to share false information and to deceive them … so having journalists who use Twitter respond is a good thing, but it is not enough.
“I think it’s critical that these discussions are global and include those people who have already been harmed.”
Twitter confirmed changes in policy would focus on policing content which could “threaten someone’s physical safety or lead to offline harm”.
In 2018, the social media giant banned pornographic deepfake videos, many of which were edited to feature a celebrity’s face.
Gregory told Euronews the company needs to carefully explain the current problem of deepfakes to avoid creating unnecessary panic.
“Twitter need to make this policy holistic and explain more about the tools they are building, how they are going to share data with investigators and how they are going to explain what they see to the public”.
The announcement of a policy change suggested to Sam Gregory that Twitter was now “catching up” with Facebook.
Meanwhile, Facebook announced they would be collaborating with Amazon Web Services and academic experts to accelerate the creation of new tools which detect manipulated videos and images on their platform.
Facebook CEO Mark Zuckerberg has previously stated that the social network was "evaluating" potential policies against misinformation ahead of the 2020 US Presidential election.
In a blog post, Amazon announced it would contribute up to $1 million (more than €897,000) in AWS credits to the ‘Deepfake Detection Challenge’ over the next two years.
Speaking at the Wall Street Journal’s Tech Live conference, Facebook’s chief technology officer, Mike Schroepfer, said: “There’s a bunch of advancing technology in making deepfakes but not a lot of good technology in identifying them right now.”
On Monday, Schroepfer said that Facebook had released a set of 5,000 deepfakes that partners could study as part of the new project.
Technology giant Microsoft and academics from the University of Oxford have also joined the initiative.
In a statement, Oxford University Professor Philip H.S. Torr described manipulated media as “a fundamental threat to democracy, and hence freedom”.
However, the project has faced opposition, with some questioning whether malicious actors could exploit Facebook’s dataset and the code that participants create.
Facebook has previously been criticised for its struggles in policing deepfake videos and misinformation on its platform. In May, a digitally altered video of Nancy Pelosi, the Speaker of the US House of Representatives, went viral; the footage had been slowed down to make it appear as though she was slurring her words.
Facebook eventually downgraded the video to prevent it from being shared, but did not remove it from the platform.
Mark Zuckerberg has himself been the subject of a deepfake: a video created by the Israeli company Canny AI appeared to show him giving a speech about his control of users’ data.
Click on the player above to watch Alex Morgan's report in The Cube.