Verification of users’ IDs on social media would not have stopped the "abhorrent" racist abuse faced by England players following the nation's defeat in the Euro 2020 final, according to an update from social platform Twitter on Tuesday.
Football players Marcus Rashford, Jadon Sancho and Bukayo Saka were the focus of a wave of online hate across social media after they missed penalties in England's shoot-out against Italy in July.
The incident sparked mass condemnation and pressure on social media companies to do more to tackle racism on their platforms.
Britain's Chartered Institute for IT called for ID verification to be put in place on social media, while a UK government petition calling for similar measures gathered almost 700,000 signatures.
But according to Twitter, the overwhelming majority of users suspended for racist abuse during the tournament were already posting using their real names.
"Our data suggests that ID verification would have been unlikely to prevent the abuse from happening - as the accounts we suspended themselves were not anonymous. Of the permanently suspended accounts from the Tournament, 99 per cent of account owners were identifiable," the company said in a blog post.
Twitter also revealed that the UK was "by far the largest country of origin" for racist tweets taken down by the company in the wake of England's Euro 2020 defeat.
Last week, the UK Football Policing Unit (UKFPU), a specialist police unit, announced it had made 11 arrests in connection with racist abuse levelled at black England players.
“There are people out there who believe they can hide behind a social media profile and get away with posting such abhorrent comments,” Cheshire Chief Constable Mark Roberts, who leads football policing for the National Police Chiefs’ Council, said.
“They need to think again".
In all, Twitter's automated tools removed 1,622 racist and abusive tweets during the Euro 2020 final and in the following 24 hours, the company said.
Following the match, England's Bukayo Saka criticised social media companies for the abuse he and his teammates had endured.
"To social media platforms Instagram, Twitter and Facebook, I don’t want any child or adult to have to receive the hateful and hurtful messages that me, Marcus and Jadon have received this week,” the 19-year-old wrote.
"I knew instantly the kind of hate that I was about to receive and that is a sad reality that your powerful platforms are not doing enough to stop these messages."
Confirmed: Twitter algorithm preferred lighter skin
In its Tuesday announcement, Twitter also revealed it would soon trial an "autoblock" feature that temporarily halts contact from accounts using abusive language.
"We are also continuing to roll out our replies prompts, which encourage people to revise their replies to Tweets when it looks like the language they use could be harmful," the company said.
Twitter's automated tools are no strangers to accusations of racism themselves. In May the company announced it was removing a photo cropping algorithm after it was suspected of discriminating in favour of people with lighter skin.
On Sunday, Swiss graduate student Bogdan Kulynych won $3,500 (€2,990) in a competition organised by Twitter after his research proved that its cropping algorithm really did favour younger-looking, slimmer and paler faces.
Facebook-owned Instagram also announced new features aimed at blocking hate, which became available to users on Wednesday.
They include the ability for users to limit the number of messages and comments they receive during periods of increased attention, and stronger warnings that appear when a user attempts to post potentially offensive language.
Instagram also announced it was rolling out its "Hidden Words" feature, which allows users to filter certain terms out of their messages.