Author(s): Süveges, Kincső (2024)
Abstract:
Automated essay scoring (AES) systems have transformed educational assessment by bringing numerous advantages. While several learning and assessment platforms have already implemented AES, the literature on its efficiency in the context of primary education is limited. This research therefore studied how accurately AES systems can evaluate writing products from primary school students. In this exploratory study, 100 texts were simulated as having been written by Pre-K–Grade 2 students. The texts were then scored automatically in a supervised machine learning setting, based on a validated rubric. Predicted and actual scores were compared to determine how accurately AES can evaluate the generated texts. Despite certain limitations, namely the lack of authentic data, the accuracy of the automated assessment was found to be fairly reliable.
Keywords: Automated Essay Scoring, Rubrics, Data Augmentation, Holistic Scoring
Document(s):
SUVEGES_MA_BMS.pdf