X is failing to moderate antisemitic and islamophobic hate speech, according to a new report

A new report by the Centre for Countering Digital Hate accuses X of failing to moderate antisemitic and islamophobic content on its platform.
By Aisling Ní Chúláin

The non-profit organisation analysed 200 examples of reported hate speech on the platform and found that 96 per cent of the posts remained online a week later.


Is X, formerly Twitter, adequately moderating hate speech on its platform as conflict rages in the Middle East?

A new report from the Centre for Countering Digital Hate (CCDH), a non-profit dedicated to countering online misinformation and hate speech, suggests the social platform is falling short in its commitment to combat content "motivated by hatred, prejudice or intolerance".

The organisation reported 200 incidents of hate speech sent from 101 accounts on the platform X that related to the Israel-Palestine conflict and found that 96 per cent of the posts remained online a week later.

According to the CCDH, the posts in question were all sent in the aftermath of Hamas’ October 7 attack on Israel and included those which incited violence against Muslims, Palestinians and Jewish people, promoted antisemitic conspiracy theories and described Palestinians in Gaza as animals.

Researchers from the organisation identified the accounts by searching through the followings, likes and posts of known hateful accounts and stressed that the sample should not be seen as a "representative sample of posts relating to the Israel-Gaza crisis, but rather as a means of testing X’s moderation systems".

The posts that remained online accumulated 24,043,693 views. Of the accounts behind them, only one was suspended and a further two were ‘locked’, meaning they cannot post, repost, or like content.

Forty-three of the 101 accounts in question were verified accounts that benefited from the increased visibility of their posts.

“After an unprecedented terrorist atrocity against Jews in Israel, and the subsequent armed conflict between Israel and Hamas, hate actors have leapt at the chance to hijack social media platforms to broadcast their bigotry and mobilise real-world violence against Jews and Muslims, heaping even more pain into the world,” Imran Ahmed, the group’s CEO and founder, said.

“X has sought to reassure advertisers and the public that they have a handle on hate speech – but our research indicates that these are nothing but empty words,” he added.

The new study follows a similar report by the organisation published in September that analysed a broader range of hate speech on the platform, finding that 86 per cent of 300 selected instances of reported hate speech remained online after a week.

Since Elon Musk’s controversial takeover of the platform last year, X has been heavily criticised over its moderation standards, particularly following the company’s move to lay off a majority of its workforce last November.

Earlier this year, Musk pushed back against the CCDH in a post that labelled the organisation "truly evil" and claimed that they "spread disinformation and push censorship".

A couple of days later, a lawyer for Musk, Alex Spiro, sent a letter to the non-profit, accusing them of making “a series of troubling and baseless claims that appear calculated to harm Twitter generally, and its digital advertising business specifically”.

In turn, the CCDH responded by calling the threat ridiculous and describing Spiro’s missive as “a disturbing effort to intimidate those who have the courage to advocate against incitement, hate speech and harmful content online”.

X did not respond to a request for comment on this story.

