New study unveils major differences between AI and human-written essays

These days, artificial intelligence is everywhere. From drafting emails and writing essays to creating artwork and even coding, AI engines like ChatGPT are becoming commonplace. 

As AI continues to permeate various aspects of our work and personal lives, understanding its impact and distinguishing between human and AI-generated content has become increasingly important. 

Preply, a language learning platform, has conducted a study to understand the impact of AI on essay writing. The study analysed over 12,000 essays to identify common words, phrases, and errors in both human-written and AI-generated content. 

Experts from the University of Edinburgh, including its School of Informatics, were consulted to understand how AI affects academia and whether it is possible to detect AI writing in student work.

The study involved analysing 12,346 student-written essays sourced from IvyPanda, covering 25 subjects. Essays included in the study had a minimum of 600 words. For comparison, ChatGPT was tasked with writing 115 essays on similar subjects. LanguageTool was used to detect common errors in the student-written essays.

Preply’s analysis found that the most frequent words in human-written essays include ‘people’, ‘also’, and ‘one’, while AI-generated essays often feature terms such as ‘social’, ‘cultural’, and ‘individuals’. 

The study also highlighted that human writers tend to use common nouns and verbs, whereas AI-generated content favours more complex and specialised vocabulary. Student-written essays were also found to have more errors, with 78% containing at least one mistake compared to only 13% of AI-generated essays.

The research also noted a significant difference in vocabulary diversity. Human-written essays used over 11,000 unique words, while AI-generated essays used around 7,000 unique words, suggesting that students employ a broader range of vocabulary.
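For readers curious about the mechanics behind figures like these, the short Python sketch below shows one way word frequencies and unique-word counts could be tallied from a set of plain-text essays. It is a simplified, hypothetical illustration rather than Preply's actual analysis pipeline; the tokenisation rules and the sample essays are assumptions.

from collections import Counter
import re

def word_stats(text):
    # Lower-case the text and pull out simple alphabetic tokens.
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens), len(set(tokens))

# Hypothetical essays; in practice these would be loaded from files.
essays = [
    "People also think that one example is enough.",
    "Social and cultural factors shape how individuals write.",
]

combined = Counter()
vocabulary = set()
for essay in essays:
    counts, _ = word_stats(essay)
    combined.update(counts)   # running word-frequency totals
    vocabulary.update(counts) # running set of unique words

print(combined.most_common(5))  # most frequent words across the corpus
print(len(vocabulary))          # vocabulary diversity (unique-word count)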

Dr. Vaishak Belle from the School of Informatics and Professor Judy Robertson from the University of Edinburgh provided their perspectives on the challenges of detecting AI in student work when asked whether they would be able to detect AI writing.

Dr. Belle noted:

"I would say it depends on the context. In some cases, no. So, for instance, certainly, in scholarly writing, you often speak in a passive voice and there are no adjectives, you don’t really embellish statements. You are trying to remain factual. I have seen re-phrasing by ChatGPT writing it, and the difference is very marginal.

I don’t think we can really beat it purely on syntactic structure. It would have to be some semantic analysis of the topic, right? So, it’s actually pretty hard. I don’t think it’s easy to do."

Professor Robertson emphasised the importance of knowing students personally to identify AI-written content, stating:

"Even experienced teachers can’t tell if you just give them a database of essays and they’ve never met the authors. They can’t distinguish between human-written essays and AI-written essays. But because mine is a small class and I know the students, I think I’d have a higher chance of doing that – because if you know somebody and the way they speak and what they’ve been working on, then it is kind of easier to tell.

I think that human connection is what will save us from AI plagiarism. It’s teachers knowing the people that they work with."
