
ChatGPT, Grok, Gemini and other AI chatbots are spewing Russian misinformation, study finds

A ChatGPT logo is seen in West Chester, Pa., Wednesday, Dec. 6, 2023. Copyright Matt Rourke/2023 The AP.
By Pascale Davies

“This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms,” the study said.


Leading generative artificial intelligence (AI) models such as OpenAI’s ChatGPT are regurgitating Russian misinformation, according to news monitoring service NewsGuard.

The study comes amid mounting concern over AI spreading misinformation as users increasingly turn to chatbots for reliable information, especially during a year of global elections.

While there has been concern about the falsehoods generated by AI, there has been little data on whether misinformation could be repeated and validated by chatbots.

NewsGuard’s study found that when researchers entered 57 prompts into 10 chatbots, the chatbots spread Russian disinformation narratives 32 per cent of the time.

The prompts asked the chatbots about stories known to have been created by John Mark Dougan, an American fugitive who, according to the New York Times, is creating and spreading misinformation from Moscow.

The 10 chatbots included OpenAI’s ChatGPT-4, You.com’s Smart Assistant, Grok, Inflection, Mistral, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google Gemini, and Perplexity.

The study did not break down the performance of each chatbot but said 152 of the 570 responses contained explicit disinformation, 29 responses repeated the false claim with a disclaimer, and 389 responses contained no misinformation — either because the chatbot refused to respond (144) or it provided a debunk (245).  

“These chatbots failed to recognise that sites such as the ‘Boston Times’ and ‘Flagstaff Post’ are Russian propaganda fronts, unwittingly amplifying disinformation narratives that their own technology likely assisted in creating,” NewsGuard said.

“This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms.”

The prompts were based on 19 significant false narratives that NewsGuard linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelenskyy. 

Governments worldwide are trying to regulate AI to protect users from potential harms, which include misinformation and bias. NewsGuard said it has submitted its study to the US AI Safety Institute of the National Institute of Standards and Technology (NIST) and the European Commission.

This month, an investigation was launched into NewsGuard by the United States House Committee on Oversight and Accountability over concern about its "potential to serve as a non-transparent agent of censorship campaigns".

NewsGuard has rejected the accusation.
