Framing automatic grading techniques for open-ended questionnaires responses. A short survey
Gabriella Casalino
2021-01-01
Abstract
The assessment of students' performance is one of the essential components of teaching, and it poses several challenges to teachers and instructors, especially when grading responses to open-ended questions (i.e., short answers or essays). Open-ended tasks allow a more in-depth assessment of students' learning, but their evaluation and grading are time-consuming and prone to subjective bias. For these reasons, automatic grading techniques have been studied for a long time, focusing mainly on short answers rather than long essays. Given the growing popularity of Massive Open Online Courses and the shift from physical to virtual classroom environments due to the COVID-19 pandemic, the adoption of questionnaires for evaluating learning performance has increased rapidly. It is therefore of particular interest to analyze recent research efforts in the development of techniques for grading students' responses to open-ended questions. In this work, we present a systematic literature review focusing on the automatic grading of open-ended written assignments. The study encompasses 488 articles published from 1984 to 2021 and aims to identify research trends and the techniques used to tackle automatic essay grading. Lastly, inferences and recommendations are given for future work in the Learning Analytics field.