Twitter is removing its photo crop algorithm that prefers white people and women

Twitter is removing an algorithm that discriminates based on gender and race. Copyright Canva
By Tom Bateman

A team of researchers found the social media giant’s auto-cropping tool most strongly discriminated in favour of white women.


New analysis released by Twitter has confirmed that the firm's automatic photo cropping algorithm discriminates on the basis of ethnicity and gender.

If presented with an image featuring a black man and a white woman, the algorithm would choose to show the woman 64 per cent of the time and the man 36 per cent of the time, Twitter's researchers found.

In comparisons of men and women, there was an 8 per cent difference in favour of women. The algorithm also showed an overall 4 per cent bias toward showing images of white people instead of black people.
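Twitter did not publish code alongside these figures, but a pairwise preference rate of this kind is simple to tally. The sketch below is hypothetical and not Twitter's methodology: it assumes each trial records which of the two people the cropper kept in frame, and the function name and simulated outcomes are illustrative only.

```python
# A hypothetical sketch of tallying pairwise cropping preference.
# Not Twitter's published methodology: we assume each trial records
# which of two people the cropping algorithm chose to keep in frame.
from collections import Counter

def preference_rates(trial_outcomes: list[str]) -> dict[str, float]:
    """Return the fraction of trials won by each label."""
    counts = Counter(trial_outcomes)
    total = len(trial_outcomes)
    return {label: count / total for label, count in counts.items()}

# With outcomes matching the split Twitter reported, 64 wins out of
# 100 trials for the woman yields a 64%/36% preference:
outcomes = ["woman"] * 64 + ["man"] * 36
print(preference_rates(outcomes))  # {'woman': 0.64, 'man': 0.36}
```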

In response, the social network said it would remove the feature, replacing it with new tools that would allow users to see a "true preview" of images added to tweets.

"One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people," Twitter's Director of Software Engineering Rumman Chowdhury wrote in a blog announcing the findings.

A similar test for the "male gaze," which aimed to discover whether the algorithm tended to focus on different parts of male- and female-presenting bodies, found no evidence of bias.


When applied to technologies like facial recognition, the consequences of biased algorithms could reach far beyond an unfairly cropped photo, according to Nicholas Kayser-Bril from Berlin-based NGO AlgorithmWatch.

"Computer vision algorithms are known to depict people with darker skin tones as more violent and closer to animals, building on old racist tropes. This is very likely to have a direct effect on racialised people when such systems are used to detect abnormal or dangerous situations, as is already the case in many places in Europe," he told Euronews.

How does Twitter's algorithm work?

Until recently, images posted to Twitter were cropped automatically by an algorithm trained to focus on "saliency" – a measure of how likely the human eye is to be drawn to a particular part of an image.

High saliency areas of an image typically include people, text, numbers, objects and high-contrast backgrounds.
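As a rough illustration of this approach – a minimal sketch, not Twitter's actual model, which was a neural network trained on eye-tracking data – the snippet below uses OpenCV's classic spectral-residual saliency detector as a stand-in: it finds the most salient point in an image and centres a fixed-size crop on it. The function name and crop dimensions are illustrative.

```python
# A minimal sketch of saliency-based cropping, not Twitter's actual model.
# Uses OpenCV's spectral-residual saliency detector as a stand-in:
# find the most "eye-catching" pixel and centre the crop window on it.
# Requires opencv-contrib-python (for the cv2.saliency module).
import cv2
import numpy as np

def saliency_crop(image: np.ndarray, crop_w: int, crop_h: int) -> np.ndarray:
    """Crop a (crop_h, crop_w) window centred on the most salient point."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Coordinates of the single most salient pixel.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    # Clamp the crop window so it stays inside the image bounds.
    h, w = image.shape[:2]
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    return image[top:top + crop_h, left:left + crop_w]

# Example: produce a 16:9 preview crop of a tall photo.
# img = cv2.imread("photo.jpg")
# preview = saliency_crop(img, crop_w=1200, crop_h=675)
```

Whatever model produces the saliency map, the crop is only as fair as the map itself: if the model systematically scores some faces as more salient than others, the crop will systematically favour them.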

However, a machine learning (ML) algorithm like the one used by Twitter is only as unbiased as the data it is trained on, explained Kayser-Bril.

"If a Machine Learning algorithm is trained on a data set that does not contain data about certain groups or certain attributes, it will output biased results," he told Euronews.

"Building a fair data set is impossible if it is to apply to a society that is not fair in the first place. Therefore, what a model optimises for is more important than the data set it was trained with.

"Artificial Intelligence communities use benchmarks for their algorithms; they are very rarely related to fairness".

Twitter's 2021 inclusion and diversity report showed that its employees worldwide were 55.5 per cent male, 43.6 per cent female and less than one per cent nonbinary.

Figures for Twitter employees in the United States – the only territory for which the company publishes ethnicity statistics – show that 7.8 per cent of staff are black, falling to 6.2 per cent when only employees working in technical roles are taken into account.

The issue with Twitter's cropping algorithm caught widespread attention last year, when Canadian PhD student Colin Madland noticed that it would consistently choose to show him rather than his colleague – a black man – when presented with pictures of the two men.

Madland's tweet about the discovery went viral, prompting other users to post very tall images featuring multiple people to see which one the algorithm would choose to show.


At the time, Twitter spokesperson Liz Kelley said the company "tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing," adding it was clear that Twitter had "more analysis to do". 
