Scientists are publishing more than ever with AI. But not all papers measure up, study finds

Stack of work papers. Copyright: Canva
By Roselyne Min

Large language models such as ChatGPT are boosting paper production, particularly for scientists who are not native English speakers. However, many AI-written papers are less likely to pass peer review.

As scientists increasingly rely on artificial intelligence for writing, coding and even generating ideas, a new study examines how AI is reshaping academic research.

What once sounded like academic gossip now reflects a real and measurable shift in scientific publishing.

Researchers at Cornell University, United States, have found that large language models (LLMs) such as ChatGPT are boosting paper production, particularly for scientists who are not native English speakers.

However, the study warns that the growing volume of AI-assisted papers is making it harder for reviewers, funders and policymakers to distinguish meaningful scientific contributions from low-quality work.

“It is a very widespread pattern, across different fields of science – from physical and computer sciences to biological and social sciences,” said Yian Yin, the corresponding author of the study and an assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science.

“There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund,” Yin added.

How did researchers study the emergence of AI-assisted papers?

The study, published in the journal Science, analysed more than two million research papers posted between 2018 and 2024 on three major online preprint servers.

These platforms host early versions of scientific papers before formal peer review, offering insight into how researchers work in real time.

To assess the impact of AI on scientific writing, the researchers trained an AI system to detect text likely generated by LLMs. They compared papers written before 2023, when tools such as ChatGPT became widely used, with later papers showing clear signs of AI assistance.

Using this approach, the team identified researchers likely to be using AI tools, measured how their publication output changed, and tracked whether those papers were later accepted by scientific journals.

AI assistance leads to a surge in productivity

Their analysis showed a substantial AI-driven boost in productivity.

Scientists who appeared to use AI tools posted far more papers than those who did not.

On one major preprint server focused on physics and computer science, AI users produced about one-third more papers. In biology and the social sciences, the increase was even larger, at more than 50 per cent.

The largest gains were seen among researchers whose first language is not English.

In some Asian institutions, scientists published between 40 percent and nearly 90 percent more papers after adopting AI writing tools, depending on the discipline.

AI tools also appear to help researchers find better references. The study found that AI-powered search tools were more likely to surface newer research papers and relevant books, rather than the older, frequently cited studies favoured by traditional search methods.

“People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas,” said Keigo Kusumegi, the first author of the study and a doctoral student at the Department of Information Science, Cornell University.

Quality concerns with AI-written papers

But the productivity boost comes with a downside. Many AI-written papers looked impressive on the surface yet were less likely to pass peer review.

Across all three preprint sites, papers likely written by humans that scored high on a writing complexity test were most likely to be accepted to a scientific journal.

But high-scoring papers probably written by LLMs were less likely to be accepted, suggesting that despite the convincing language, reviewers deemed many of these papers to have little scientific value.

The researchers behind the study expect the impact of this growing reliance on AI to broaden, and argue that policymakers should establish new rules to regulate the rapidly evolving technological landscape.

“Already now, the question is not, 'Have you used AI?' The question is, 'How exactly have you used AI and whether it’s helpful or not,'” said Yin.
