ChatGPT: Is it possible to detect AI-generated text?

ChatGPT can generate convincing text, but that doesn't mean what it says is factual. - Copyright Canva
By Sophia Khatsenkova

A few weeks after the launch of ChatGPT, a powerful artificial intelligence (AI) chatbot, Darren Hick said he caught one of his students cheating when they submitted an AI-generated essay.

The new technology, released by OpenAI and freely available to the public, can handle almost any text-based request. 

Type a request and it can write a persuasive school essay, compose a song or even crack a silly knock-knock joke within a few seconds. 

Although it can write convincingly, that doesn't mean what it says is true. 

That's what set off alarm bells for Hick, an assistant professor of philosophy at Furman University, in the US state of South Carolina. 

"The first red flag that popped up was that my student was talking about a philosopher in a way I wouldn’t expect the class to know," he told Euronews.

"The final red flag was how well-written it was. The essays that are plagiarised are pretty poorly written - the voice changes, the structure doesn’t work. It was an odd combination of flags I had never seen before," he said. 

The software has raised widespread ethical concerns, especially about cheating in academia.

But even more worrying, the AI has been used to generate realistic-sounding fake news articles, sparking fears that it could be misused to influence elections, for example. 

How can you tell if an AI wrote the text you're reading?

A few things can make you suspicious.

If several people ask ChatGPT exactly the same question, it will generate nearly the same answer for each of them. 

So, if you’re a teacher and you’re correcting several assignments that have the same construction or the same examples or reasoning, then it might be a text generated by AI.
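The check described above can be done by hand, but it can also be automated. The following is a minimal sketch (not a tool mentioned in the article) of flagging suspiciously similar submissions by measuring pairwise text overlap; all names and the 0.8 threshold are hypothetical illustrations.

```python
# Hypothetical illustration: flag pairs of essays whose wording overlaps
# heavily, as might happen when several students paste the same AI answer.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Return a rough 0-1 similarity score between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_similar(essays: dict, threshold: float = 0.8) -> list:
    """Return (name, name, score) for essay pairs above the threshold."""
    flagged = []
    for (name1, text1), (name2, text2) in combinations(essays.items(), 2):
        score = similarity(text1, text2)
        if score >= threshold:
            flagged.append((name1, name2, round(score, 2)))
    return flagged

# Toy data: two near-identical submissions and one unrelated essay.
essays = {
    "student_a": "Kant argues that moral worth lies in acting from duty.",
    "student_b": "Kant argues that moral worth lies in acting from duty alone.",
    "student_c": "Utilitarians judge actions by their consequences instead.",
}
print(flag_similar(essays))
```

A real plagiarism checker would use more robust measures than character overlap, but the idea is the same: near-identical structure across independent submissions is a red flag.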

Another clue to look out for is how the AI responds to recent events, according to Muhammad Abdul-Mageed, Canada Research Chair in Natural Language Processing and Machine Learning at the University of British Columbia.

"The point of weakness of ChatGPT is that it’s trained on outdated data from 2021 or 2022. It will not be able to detect something that happened recently," he told Euronews.

"For example, it would not be able to tell you if France did or did not win the Qatar World Cup. That type of factual information is hard for the model to tell". 

An app to 'detect' the use of ChatGPT

Others are developing models to detect whether a text was written by AI. That's the case for Edward Tian, a 22-year-old university student who claims to have created an app, GPTZero, that can tell whether ChatGPT has been used.

But none of these methods is foolproof, because no fully reliable detection model exists today, according to Irene Solaiman, policy director at AI start-up Hugging Face.


"There is no magical solution to AI detection. Just like humans, as these models become more powerful, these detection models are playing catch-up and they’re not going to be as good," she told Euronews.

For Abdul-Mageed, working on detection models is like a game of whack-a-mole.

"It’s like trying to chase a moving target. Every time you come up with something to detect the model, there is an even better model. I don't think this race is the best way to solve this," he said.

Although many in the AI community have lauded the potential educational benefits of ChatGPT, the prospect of AI slipping into the classroom terrifies Hick, the university professor.

"The problem with this software is it's designed to get better. It’s trained on a broad data set. If the data set is expanded, these little red flags that I noticed could be gone in one year. The genie is out of the bottle and this is scary," he told Euronews. 

Experts believe that, with artificial intelligence becoming increasingly powerful, it may be time to rethink how the education system tests students and which policies lawmakers could implement to prevent AI tools from causing real-world harm. 

"A lot of policies aren’t updated to integrate these models. Is writing a whole essay using AI academically dishonest? We’ve never had to think of that before," said Solaiman.

As for Hick, he says he had to reconsider the way he teaches: "Every time I give an essay assignment, I have to ask myself 'What could ChatGPT do with this?'"

He has since found a solution: an impromptu oral exam if he ever gets a whiff of an assignment that looks like it's been written by AI.