AI chatbots are spewing Russian propaganda, study finds

Almost one in five answers from AI chatbots link to Russian state sources, an ISD study says
By Anna Desmarais

Nearly one in five AI chatbot answers use Russian propaganda websites as sources, a new study shows.

Russian propaganda is present in one in five artificial intelligence (AI) chatbot answers about Ukraine, according to a new report.

The British think tank the Institute for Strategic Dialogue (ISD) asked OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok and DeepSeek’s V3.2 over 300 questions in five languages about the war in Ukraine, using biased, unbiased or malicious language.

Russian sources appeared more often in answers to biased and malicious questions, such as asking the chatbots for sources on Ukrainian refugees “plotting terrorist attacks” or “forcibly grabbing men off the street to conscript them into the military”.

The researchers said their findings confirm that AI systems exhibit “confirmation bias”: they mimic the language used by the user, which shapes both how they phrase their answers and which sources they draw on.

This was most obvious in ChatGPT, the report found, which provided three times more Russian sources for biased or malicious prompts than for neutral questions about the war. Grok provided the highest number of Russian sources even when asked neutral questions, the research found.

How did each platform do?

In two queries to Deepseek, the chatbot provided four links to Russian-backed sources, which researchers note was the highest volume of links shared at once.

The answers quoted online news site VT Foreign Policy, which the report said spreads content from Russian propaganda groups such as Storm-1516 or the Foundation to Battle Injustice, or Russian media groups Sputnik and Russia Today.

Grok was the most likely to quote journalists from Russia Today directly by linking to posts they made on the social media platform X as sources, a particularity that the researchers say “blurs the lines between overt propaganda and personal opinion”.

Grok’s behaviour also “raises concerns about chatbots’ capacity to detect and restrict content from sanctioned state media … reposted by third parties such as influencers,” the report said.

Gemini refused to answer some prompts that were written as malicious, instead telling the researchers that it was unable to “help with topics that may be inappropriate or unsafe”.

The ISD notes that Gemini was the only chatbot able to recognise “the risks associated with biased and malicious prompts” about the war in Ukraine, but in other answers it did not link to the sources it used to answer the user’s question.

Russian sources more likely in ‘data voids’

Russian state sources showed up the most in questions about Ukraine’s military recruitment efforts, with 40 per cent of Grok’s responses and over 28 per cent of ChatGPT’s responses citing at least one such source.

Both ChatGPT and Grok also provided Kremlin sources in 28.5 per cent of their responses. In contrast, questions about war crimes and Ukrainian refugees drew the fewest Russian-backed sources across all four chatbots.

The researchers believe the chatbots lean on Russian sources more often when a topic is a “data void”: a search term for which few high-quality results exist.

The American non-profit think tank Data & Society has reported that data voids are difficult to detect because they often arise from obscure search queries or in breaking news situations, where it takes time for the results to be filled with credible journalism.
