The exam system of a prominent UK university failed to detect nearly all AI-generated submissions in a recent test. 

Conducted at the University of Reading's School of Psychology and Clinical Language Sciences, the experiment exposed weaknesses in the institution's methods of identifying AI-generated work.

Researchers used the AI chatbot GPT-4 to generate exam answers for 33 fictitious students. 

The AI-crafted answers were then submitted for grading; the examiners were unaware that the study was taking place.

The findings were startling: 94 per cent of the AI-generated submissions went undetected. 

Even more concerning, these AI-generated answers typically received higher grades than those written by real students: in 83.4 per cent of cases, the AI submissions outperformed the genuine ones.

This raises serious questions about the efficacy of current exam-monitoring systems and the potential for students to misuse AI tools to gain an unfair advantage.

The rise of AI tools such as ChatGPT has led to growing concerns about academic dishonesty. 

This issue has been exacerbated by the shift from supervised, in-person exams to unsupervised take-home exams during the COVID-19 pandemic, a model many institutions continue to use. 

Despite efforts to develop detection tools, identifying AI-generated text remains a challenge.

The researchers suggest that a return to supervised, in-person examinations could mitigate some risks. However, as AI continues to integrate into professional environments, they emphasise the need for universities to adapt and incorporate these technologies constructively into educational frameworks.

“A rigorous blind test of a real-life university examinations system shows that exam submissions generated by artificial intelligence were virtually undetectable and robustly gained higher grades than real students,” the authors say. 

“The results of the ‘Examinations Turing Test’ invite the global education sector to accept a new normal and this is exactly what we are doing at the University of Reading. 

“New policies and advice to our staff and students acknowledge both the risks and the opportunities afforded by tools that employ artificial intelligence.”
