A browser extension powered by AI can soften how people feel about supporters of opposing political views, according to new research conducted around the 2024 US presidential election.
Researchers in the United States have developed a new tool that allows independent scientists to study how social media algorithms affect users—without needing permission from the platforms themselves.
The findings suggest that platforms could reduce political polarisation by down-ranking hostile content in their algorithms.
The tool, a browser extension powered by artificial intelligence (AI), scans posts on X, formerly Twitter, for anti-democratic themes and extremely negative partisan views, such as posts calling for violence against, or the jailing of, supporters of an opposing party.
It then re-orders posts on the X feed in a "matter of seconds," the study showed, moving the polarising content nearer to the bottom of a user's feed.
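The study does not publish the tool's code, but the demote-and-reorder step it describes can be sketched as a simple stable partition. Here, `score_hostility` is a hypothetical stand-in for the extension's AI classifier, and the 0.5 threshold is an illustrative assumption, not a detail from the research:

```python
def rerank_feed(posts, score_hostility):
    """Demote posts flagged as hostile or anti-democratic to the bottom
    of the feed, preserving the original order within each group.

    `score_hostility` is a hypothetical classifier returning a 0-1 score;
    the 0.5 cut-off is an assumption for illustration only.
    """
    neutral = [p for p in posts if score_hostility(p) < 0.5]
    hostile = [p for p in posts if score_hostility(p) >= 0.5]
    return neutral + hostile


# Toy usage with a keyword check standing in for the AI classifier
feed = ["great turnout today", "jail all their voters", "weather is nice"]
flag = lambda post: 1.0 if "jail" in post else 0.0
print(rerank_feed(feed, flag))
# → ['great turnout today', 'weather is nice', 'jail all their voters']
```

Keeping the relative order inside each group mirrors the study's description: nothing is deleted, content is only pushed down the feed.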
The team of researchers from Stanford University, the University of Washington, and Northeastern University then tested the browser extension on the X feeds of over 1,200 participants who consented to having their feeds modified for 10 days in the lead-up to the 2024 US presidential election.
Some of the participants used the browser extension that showed more divisive content, and the rest used the one that demoted it to a lower position on the feed. The results were published in the journal Science on Thursday.
A new way to rerank 'without platform collaboration'
The researchers asked participants to rate their feelings about the opposing political party on a scale of 1 to 100 during the experiment.
For the participants who used the browser tool, attitudes towards the opposing party improved on average by two points, roughly equivalent to the estimated change in Americans' partisan attitudes over three years.
"These changes were comparable in size to 3 years of change in United States affective polarisation," the researchers noted.
The results were bipartisan: the effects were consistent across party lines, holding for participants with both liberal and conservative views.
Tiziano Piccardi, assistant professor of computer science at Johns Hopkins University, said the tool has a “clear” impact on polarisation.
“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” he said in a statement. “When they were exposed to more, they felt colder”.
The researchers note that this could be a new way of reranking social media feeds “without platform collaboration”.
“These interventions may result in algorithms that not only reduce partisan animosity but also promote greater social trust and healthier democratic discourse across party lines,” the study concluded.
The study also looked into emotional responses and found that participants whose feeds showed less hostile content reported feeling less angry and sad while using the platform. But the emotional effects did not continue after the study ended.
The researchers wrote that their tool only works for users logged in to X in a web browser, not the app, which could limit its effects.
Their study also did not measure the long-term impact that seeing less polarising content could have on X users.