White nationalism hearing hit by wave of hate speech on YouTube

Image: White nationalists march at the University of Virginia on the eve of a "Unite the Right" rally in Charlottesville in 2017. Alejandro Alvarez / Reuters file
By Ben Collins and Jason Abbruzzese with NBC News Tech and Science News



A House Judiciary Committee hearing Tuesday about the rise of white nationalism unleashed a wave of online hate speech, prompting YouTube to turn off chats on livestreams of the hearing.

"Due to the presence of hateful comments, we disabled comments on the livestream of today's House Judiciary Committee hearing," YouTube announced on its Twitter account.

Several livestreams of the hearing on YouTube were bombarded with racist and anti-Semitic posts in the platform's live chat feature just moments after the hearing began. Those chats, hosted on the YouTube streams of news services such as PBS and on the House Judiciary Committee's official stream, appeared to be unmoderated and were visible to all users.

Although YouTube said it eventually disabled the comments feature on those videos, comments and chat features on other YouTube streams of the hearing remained live through the morning. Those chat transcripts were riddled with racist and anti-Semitic abuse.

The hearing featured a panel of witnesses from civil rights organizations who testified on the rise of white nationalism and race-related violence, as well as the role technology platforms have played in the spread of hate speech. While much of the testimony focused on statistics documenting anti-Semitic and anti-black violence, other witnesses pushed back on the assertion that white nationalism was a problem on the far right, arguing instead that the far left was fueling hateful rhetoric.

During the early portion of the hearing, no lawmakers suggested new laws or regulations to crack down on the spread of hate speech on the internet.

Some far-right YouTube channels raised money off the hearing. Red Ice, a channel "focusing on issues concerning European survival," hosted a stream of the hearing under the title "House Judiciary committee Hearing on Criminalizing Nationalism for White People." Some users donated to the channel through YouTube's donation system, with one adding a white nationalist slogan along with their contribution.

The hearing comes as the role of technology companies in the hosting and spread of hate speech has come under renewed scrutiny after a gunman killed 50 people at two mosques in New Zealand last month. The gunman livestreamed the killings to Facebook, and the video then circulated on the social network and other social media platforms. The gunman also said he had been radicalized online.

Facebook later changed its policies and banned white nationalism, which had previously been permitted on its service.

While the companies rushed to remove the video, the event galvanized international criticism of tech companies for not doing enough to crack down on hate speech. Outside the United States, politicians from the United Kingdom and Canada have openly criticized big tech companies and said they are considering laws that would hold the platforms accountable for the spread of harmful content. Australia recently passed laws that include possible jail time for executives of social media companies if violent content is not removed quickly.

Social media companies grew quickly in the late 2000s and early 2010s under the principle that they were not responsible for the content posted to their platforms. Companies including Facebook, Twitter and Google-owned YouTube tried to remove some content, such as child pornography and extreme violence, but allowed the vast bulk of their users to operate with few limitations.

That began to change in the mid-2010s, when the Islamic State group's use of social media to spread militant propaganda and recruit followers came under scrutiny. Tech companies have since claimed they are able to identify and remove most content posted by Islamist extremists, leading to calls for similar action against white nationalists.

This story is developing. Please check back for updates.

