EU applauds social media platforms for removing more illegal ‘hate speech’
Social media platforms are removing almost three-quarters of content considered "hate speech" within 24 hours, according to new EU figures.
The likes of Facebook, Twitter, and YouTube, all of which signed up to Brussels’ “code of conduct” rules three years ago, have improved their removal of flagged content since the first evaluation four years ago, when 40% was removed within 24 hours.
Instagram and Google+ joined the code in 2018.
Platforms signed up to the EU's voluntary rules on hate speech in an effort to stop Brussels introducing binding regulation across the bloc.
On average, the companies removed 72% of the illegal hate speech notified to them, which the Commission deemed to be "satisfactory".
What about freedom of speech?
The report said the removal rate was satisfactory as some of the content flagged by users could relate to content that was not illegal.
"In order to protect freedom of speech, only content deemed illegal should be removed," it added.
Xenophobia (including anti-migrant hatred) was the most commonly reported ground of hate speech, making up 17% of notifications, followed by sexual orientation (15.6%) and anti-Muslim hatred (13.0%).
How was the analysis carried out?
As many as 39 organisations from 26 Member States (all except Luxembourg and Denmark) sent notifications of hate speech they deemed illegal to the social media platforms taking part in the Code of Conduct over a period of six weeks (5 November – 14 December 2018).
In total 4,392 notifications were submitted, with 2,748 submitted through the reporting channels available to general users, while 1,644 were submitted through specific channels available only to "trusted flaggers/reporters".
The organisations taking part in the monitoring exercise also submitted 503 cases of hate speech to the police, public prosecutor’s bodies or other national authorities.
Which platforms came out on top?
Facebook was sent the highest number of notifications (1,882), followed by Twitter (1,314) and YouTube (889), while Instagram and Google+ received 279 and 28 respectively. Microsoft was not sent any notifications.
Facebook assessed the alerts in less than 24 hours 92.6% of the time and 5.1% of the time in less than 48 hours, while YouTube's results were 83.8% (24 hours) and 7.9% (48 hours), and Twitter's were 88.3% (24 hours) and 7.3% (48 hours).
The Commission described Instagram’s performance as "positive", with 77.4% of notifications assessed in less than 24 hours. Google+ did so in 60% of cases.
In which EU countries was most content taken down?
In this year's study, 100% of flagged content was removed in Cyprus, Bulgaria, and Greece, while in 2017, Germany was the only country to see this result.