A report analysed 3.5 million social media posts spreading disinformation ahead of the EU elections.
Russian bots spreading disinformation may have reached as many as 241 million Europeans, almost half of the EU population, according to a new report that analysed roughly 3.5 million social media posts.
The disinformation came from 6,700 “bad actors”: Russian-associated automated bots and human-run accounts that posted content amplifying existing societal divisions and trolling EU politicians.
Germany, France, and the UK were targeted with “a disproportionate amount of bad actor-based messaging, relative to their MEP representation,” the report said.
“If you absorb information through electronic means, all of them have been successfully poisoned in one way or another,” Otavio Freire, co-founder of SafeGuard Cyber, the US firm that conducted the investigation, said in an interview with Euronews.
The release comes days after Facebook announced it had removed 21 Russian Facebook pages and Instagram accounts spreading coordinated disinformation about Austria, the Baltics, Germany, Spain, Ukraine, and the UK. It removed another 97 Russian accounts that spread disinformation about Ukraine.
As part of the analysis, the firm reviewed posts from November 2018 to March 2019 and published a snapshot of the investigation that covers ten days in early March.
The company said its database contains over 500,000 known “troll and bot accounts,” but Euronews was unable to independently verify the database or whether the accounts were indeed Russian.
The authors said they had a high level of confidence that the accounts were Russian, but that “attribution is hard”.
Amplifying existing content
The group found that Russian-linked “bad actors” used automated bots and human actors to troll EU politicians and amplify existing societal fissures, a technique that, the report claimed, Russian operatives have adopted since 2016.
“In lieu of creating their own content from scratch, they now appear to spend more effort amplifying existing content that is being created organically,” the report says.
In an example from Germany, a tweet by a politician from the far-right Alternative for Germany was retweeted by an account that posted 2.3 times per second for hours at a time. One Russian-linked bad actor also posted content about terrorism and Islam, tagging it with hashtag abbreviations of Angela Merkel’s party and her colleagues to associate them with the post.
The Atlantic Council’s Donara Barojan, who is not connected to the report or the firm, analyses online disinformation campaigns in the US and Europe. She said amplifying divisions is a technique investigators have observed over the past couple of years.
“Hostile actors are becoming more reliant on content generated by real users, which is then amplified by inauthentic networks,” she wrote in an email to Euronews. The shift to “relying on videos and posts generated by users like you and me means it will only become more difficult to identify influence campaigns because they will look more organic.”
Mobilising around real events and people
Bad actors also mobilised around real events to shape public perception.
On March 4, French President Emmanuel Macron published an article about his ideas on Europe, after which “bad actor activity increased 79%,” with accounts sharing content attacking Macron.
And after a "yellow vests" protest on March 9, the investigators found that bot and troll accounts increased activity by 62%, posting content that claimed there was a lack of media coverage and that the movement was being silenced.
By running official EU politician Twitter accounts through an “analytics engine”, the SafeGuard Cyber analysis determined how many “bad actor” followers the politicians had. For instance, 12% of EU Commission President Jean-Claude Juncker’s followers could be classified as “bad actors,” according to the report.
“You can only fight it with matching technology,” firm co-founder Freire said. “It’s not traditional warfare.”
The Atlantic Council’s Barojan said that just because someone scrolled past Russian-linked content does not mean they engaged with it.
“The real challenge is those underlying divisions that make us easy to exploit,” she told Euronews. “As important as fact-checking is, addressing those vulnerabilities is just as vital.”