Social media manipulation campaigns now target users with video content: report

Copyright REUTERS/Dado Ruvic/Illustration/File Photo
By Euronews with Reuters

"Images are often more powerful than words with more potential to go viral," said Samantha Bradshaw, one of the report's author's


Viral memes, videos, and pictures are fuelling organised social media manipulation campaigns on Instagram and YouTube, researchers at Oxford University said on Thursday.

In an annual report on disinformation trends, the Oxford Internet Institute's Computational Propaganda Research Project said Facebook remained the most popular platform for social media manipulation because of the company's size and global reach.

However, the shift towards visual content shared online means that users of Google's YouTube video platform and Facebook's Instagram photo-sharing site are increasingly being targeted with false or misleading messages, said Samantha Bradshaw, one of the report's authors.

"On Instagram and YouTube it's about the evolving nature of fake news — now there are fewer text-based websites sharing articles and it's more about video with quick, consumable content," she said. "Memes and videos are so easy to consume in an attention-short environment."

"It's easier to automatically analyse words than it is an image," Bradshaw said. "And images are often more powerful than words with more potential to go viral."

The report highlights the challenges facing tech giants such as Facebook and Google in fighting the spread of political and financially motivated disinformation.

A Facebook spokesman said showing users accurate information was a "major priority" for the company.

"We've developed smarter tools, greater transparency, and stronger partnerships to better identify emerging threats, stop bad actors, and reduce the spread of misinformation on Facebook, Instagram and WhatsApp," the spokesman said.

YouTube said it had invested in policies, resources and products to tackle misinformation on its site and that it regularly removes content violating its terms of use. A spokesman contacted by Reuters declined to comment on Oxford University's findings.

Earlier this month, tech companies announced they were forming an alliance with the BBC to help fight the spread of disinformation on their sites. The strategy includes an "early warning system" that allows them to alert each other when they find false content.

But Bradshaw said targeting internet users with visual content would make it hard for tech giants to identify and ban the manipulated content.

Facebook and YouTube both came under criticism over their ability to ban visual content after a mass shooting in New Zealand in March, when the gunman live-streamed his attack on two mosques on Facebook and users subsequently reshared the footage across multiple social media platforms.

The Oxford University report said that social media manipulation campaigns had taken place in 70 countries worldwide, an increase from 28 in 2017.

"Computational propaganda has become a normal part of the digital public sphere," the report said.

"These techniques will also continue to evolve as new technologies ... are poised to fundamentally reshape society and politics."

