A UK regulator warned online service providers about the risk of their platforms being used to stir up hatred and incite violence.
The UK’s communications regulator, Ofcom, urged social media companies in an open letter to prevent their platforms from being used to incite violence amid the ongoing riots in the country.
“UK-based video-sharing platforms must protect their users from videos likely to incite violence or hatred,” Gill Whitehead, Ofcom’s group director for online safety, said in the letter.
“We therefore expect video-sharing platforms to ensure their systems and processes are effective in anticipating and responding to the potential spread of harmful video material.”
The letter follows a week of violent civil unrest and riots across the UK, sparked when three young girls were killed in a knife attack in Southport on July 29.
Social media platforms such as X and Telegram have been used to spread misinformation about the attack. A recent analysis by the Institute for Strategic Dialogue found that far-right channels on Telegram were used to stir up anti-Muslim hate and encourage extremist behaviour.
This is the second warning letter from the online regulator. The first, dated August 5, describes “significant financial penalties” of up to £18 million (€20.9 million) or 10 per cent of a tech company’s global revenue, whichever is greater, for breaches of safety duties once the Online Safety Act comes into force later this year.
More restrictions to come before end of year
The UK’s new Online Safety Act (OSA) means platforms could be sanctioned if they fail to protect their users from content likely to incite violence or hatred. The Act also sets out how platforms must assess this kind of content, with specific details coming later this year.
Platforms with large audiences of more than 3 million users will also be required to adhere to stricter rules under the OSA, with those that allow people to share user-generated content or that offer recommended content subject to extra requirements.
These could include greater transparency about how they report hateful content, stronger measures against fake advertisements, and more ways to verify a user’s identity.
“In a few months, safety duties under the Online Safety Act will be in place, but you can act now – there is no need to wait to make your sites and apps safer for users,” Whitehead said.