AI poses ‘existential threat’ to polling, mimicking human responses almost perfectly, study finds

New research shows that AI tools can trick online systems into thinking they're human, corrupting online survey results.
By Anca Ulea

‘We can no longer trust that survey responses are coming from real people,’ said the author of a new study from Dartmouth College.

Artificial Intelligence (AI) is making it nearly impossible to distinguish human responses from bots in online public opinion polls, according to new research.

A Dartmouth College study published on Monday in the Proceedings of the National Academy of Sciences shows that large language models (LLMs) can corrupt public opinion surveys at scale.

“They can mimic human personas, evade current detection methods, and be trivially programmed to systematically bias online survey outcomes,” according to the study.

The findings reveal a “critical vulnerability in our data infrastructure,” posing a “potential existential threat to unsupervised online research,” said the study's author, Sean Westwood, an associate professor of government at Dartmouth.

AI interference in polling could add another layer of complexity to crucial elections. Online monitoring groups have already flagged AI-fuelled disinformation campaigns in European elections, most recently in Moldova.

Tricking the system

To test the vulnerability of online survey software, Westwood designed and built an “autonomous synthetic respondent,” a simple AI tool that operates from a 500-word prompt.

For each survey, the tool would adopt a demographic persona based on randomly assigned information – including age, gender, race, education, income, and state of residence.

With this persona, it would simulate realistic reading times, generate human-like mouse movements, and type open-ended responses one keystroke at a time – complete with plausible typos and corrections.
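
The paper's actual prompt and code are not reproduced in this article, so the following is a minimal, hypothetical Python sketch of the behaviour described above – the persona assignment and the keystroke-level typing with plausible typos and corrections. All names, timing ranges, and error rates are illustrative assumptions, not values from the study.

```python
import random
import time

# Illustrative sketch only: names, timing ranges, and error rates
# are assumptions, not figures from the Dartmouth study.

# A few keys and their keyboard neighbours, for plausible slips.
KEY_NEIGHBOURS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "t": "ry"}


def random_persona() -> dict:
    """Assemble a randomly assigned demographic persona."""
    return {
        "age": random.randint(18, 85),
        "gender": random.choice(["man", "woman"]),
        "education": random.choice(["high school", "college", "postgraduate"]),
        "state": random.choice(["NH", "OH", "TX", "CA"]),
    }


def press_key(key: str) -> None:
    """Placeholder: a real agent would drive a browser keystroke here."""
    print(key, end="", flush=True)


def type_like_a_human(text: str, typo_rate: float = 0.03) -> None:
    """Type one keystroke at a time, occasionally hitting a
    neighbouring key by mistake and correcting it with a backspace."""
    for char in text:
        if char in KEY_NEIGHBOURS and random.random() < typo_rate:
            press_key(random.choice(KEY_NEIGHBOURS[char]))  # plausible typo
            time.sleep(random.uniform(0.1, 0.4))
            press_key("\b")                                 # then correct it
        press_key(char)
        time.sleep(random.uniform(0.05, 0.25))  # human-ish inter-key delay


if __name__ == "__main__":
    print(random_persona())
    type_like_a_human("I usually vote in every election.")
```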

In over 43,000 tests, the tool passed as human 99.8 per cent of the time. It made zero errors on logic puzzles and bypassed traditional safeguards designed to detect automated responses, such as reCAPTCHA.

"These aren't crude bots," Westwood said. "They think through each question and act like real, careful people making the data look completely legitimate".

Can we still trust survey results?

The study examined the practical vulnerability of political polling, taking the 2024 US presidential election as an example.

Westwood found that as few as 10 to 52 fake AI responses would have been enough to flip the predicted outcome in seven top-tier national polls during the crucial final week of the campaign.

Each of these automated respondents would have cost as little as 5 US cents (4 euro cents) to deploy.
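
Taken at face value, the arithmetic is stark. A back-of-the-envelope check, assuming the per-response figure above:

```python
# Back-of-the-envelope cost of flipping one poll, using the
# article's figures: 10 to 52 fake responses at 5 US cents each.
COST_PER_RESPONSE_USD = 0.05

for n in (10, 52):
    print(f"{n} responses: ${n * COST_PER_RESPONSE_USD:.2f}")

# Output:
# 10 responses: $0.50
# 52 responses: $2.60
```

Even at the upper bound, that works out to less than three US dollars per poll.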

In tests, the bots worked even when their instructions were written in Russian, Mandarin, or Korean – still producing flawless English answers. This means they could easily be exploited by foreign actors, some of whom have the resources to design even more sophisticated tools to evade detection, the study warned.

Scientific research also relies heavily on survey data – with thousands of peer-reviewed studies published every year based on data from online collection platforms.

“With survey data tainted by bots, AI can poison the entire knowledge ecosystem,” said Westwood.

His study argues that the scientific community urgently needs to develop new ways to collect data that can’t be manipulated by advanced AI tools.

"The technology exists to verify real human participation; we just need the will to implement it," Westwood said.

"If we act now, we can preserve both the integrity of polling and the democratic accountability it provides".
