AI chatbots intentionally spreading election-related disinformation, study finds

Study found AI chatbots have been 'intentionally' giving misleading answers to election queries. Copyright Canva
By Anna Desmarais

An updated report has concluded that most tech companies have not made enough changes to their platforms to stop them spreading false information.


Europe’s most popular artificial intelligence (AI) chatbots are now intentionally spreading election-related disinformation to their users, an updated study has found. 

Democracy Reporting International (DRI) examined how four chatbots, Google Gemini, OpenAI’s ChatGPT4 and ChatGPT4-o, and Microsoft’s Copilot, responded to questions directly related to the electoral process. 

From May 22-24, researchers asked the chatbots five questions in 10 EU languages, including how a user would register to vote if they live abroad, what to do to vote by mail, and when the results of the European Parliament elections would be out. 

"We titled our last study 'misinformation'… we have changed the category now to 'disinformation,' which implies a level of intention," the report reads. 

"Once a company has been made aware of misinformation but fails to act on it, it knowingly accepts the spread of false information". 

The revised study extends an April report by DRI, which concluded that chatbots were unable to “provide reliably trustworthy answers” to typical election-related questions. 

"When you ask [AI chatbots] something for which they didn't have a lot of material and for which you don’t find a lot of information for on the Internet, they just invent something," Michael-Meyer Resende, Executive Director of DRI, told Euronews Next at the time.

Gemini, Copilot refuse to answer

The testing showed that Gemini refused to respond to any questions about the electoral process in any of the languages tested, the report said. 

"We consider this a responsible way to address this problem, as it is better for users to get no information (and look elsewhere) than incorrect information about elections," the study continued. 

DRI’s April report found that Google’s Gemini had the worst performance for providing accurate and actionable information, as well as the highest number of refusals to respond.

Google previously told Euronews Next that it had introduced further restrictions on how Gemini could answer election-related questions in all 10 languages used in this study. 

Microsoft’s Copilot refused to answer some questions, notably in English and Portuguese, and instead referred users to Bing, its search engine, for reliable information. 

There was "no discernible pattern" in the other languages for when Copilot would attempt or refuse to answer. To the researchers, this suggests Microsoft has put some effort into limiting its chatbot's hallucinations, but the restriction does not extend to all EU languages. 

When Copilot did answer, researchers found more than one in three of its answers to be partially or completely incorrect.  

Researchers sometimes observed the same issues as in their last study: for example, when answering in Polish, Copilot failed to mention that nationals who live abroad can register to vote for their country’s MEPs. 

In another example, Copilot told Greek voters they had to register in order to vote when, in fact, all citizens are automatically registered. 

In a previous statement to Euronews Next, Microsoft outlined its actions ahead of the European elections, including a set of election protection commitments that "help safeguard voters, candidates, campaigns and election authorities".

Among these commitments is providing voters with “authoritative election information” on Bing.


"While no one person, institution or company can guarantee elections are free and fair, we can make meaningful progress in protecting everyone’s right to free and fair elections," Microsoft’s statement read. 

OpenAI should 'urgently retrain' chatbots

Both ChatGPT4 models rarely refused to answer questions, leading to higher rates of incorrect or partially correct answers than their chatbot competitors. Their mistakes often concerned small but important details. 


In one case, ChatGPT referred Irish voters to one specific form to fill out in person in order to vote when, in reality, forms are available online and the type of form needed depends on "personal context and status". 

"By only focusing on one form, the chatbots provided an incomplete picture of the process of registration in Ireland," the study found. 

Researchers said ChatGPT could have been more accurate by "[providing]… more general information" instead of detailed answers to users’ questions. 


A disclaimer advising users to verify ChatGPT's information with their local election authority was only sporadically added to the responses, the report continued. 

"OpenAI does not seem to have made any attempts to avoid electoral disinformation," the study concludes. 

"It should urgently retrain its chatbots to prevent such disinformation," the report continued. 

OpenAI explains on its website that its approach to elections-related content is to "continue platform safety work by elevating accurate voting information" and improving the company’s transparency.
