A study by researchers at the University of Reading shows that AI-generated exam submissions managed to deceive university markers, raising concerns about the integrity of educational assessments and the challenge of detecting AI-generated work.

Researchers at the University of Reading conducted a study in which AI-generated exam submissions were used to test whether university markers could detect them. The team created 33 fake student identities and submitted unedited answers generated by ChatGPT-4 to take-home online assessments in undergraduate psychology courses. The markers, who were unaware of the study, flagged only one AI-generated submission; the rest went undetected and received, on average, higher grades than genuine student submissions.

Conducted by Dr. Peter Scarfe and colleagues at Reading’s School of Psychology and Clinical Language Sciences, the study argues that AI has, in this context, passed a real-world Turing test: markers could not reliably distinguish machine-written answers from human ones. The results underscore the growing difficulty of detecting AI-generated academic work and the threat this poses to the integrity of educational assessment.

The study emphasizes that the global education sector must adapt its assessment methods in response to advances in AI. Prof. Etienne Roesch, a co-author, suggests developing guidelines on how students can use AI ethically in their work, to prevent a crisis of trust. The University of Reading plans to move away from take-home online exams and to incorporate real-world, workplace-related assessments and ethical AI use into its curriculum.


Ivan Massow, Senior Editor at AI WEEK, is a lifelong entrepreneur who has worked at Cambridge University's Judge Business School and the Whittle Lab, nurturing talent and transforming innovative technologies into successful ventures.
