Framing automatic grading techniques for open-ended questionnaires responses. A short survey

Schicchi D.; Taibi D.
2021-01-01

Abstract

The assessment of students' performance is an essential component of teaching, and it poses several challenges to teachers and instructors, especially when grading responses to open-ended questions (i.e., short answers or essays). Open-ended tasks allow a more in-depth assessment of students' learning, but their evaluation and grading are time-consuming and prone to subjective bias. For these reasons, automatic grading techniques have been studied for a long time, focusing mainly on short answers rather than long essays. Given the growing popularity of Massive Open Online Courses and the shift from physical to virtual classroom environments due to the COVID-19 pandemic, the adoption of questionnaires for evaluating learning performance has increased rapidly. Hence, it is of particular interest to analyze researchers' recent efforts to develop techniques for grading students' responses to open-ended questions. In this work, we present a systematic literature review on the automatic grading of open-ended written assignments. The study encompasses 488 articles published from 1984 to 2021 and aims to identify research trends and the techniques used to tackle automatic essay grading. Lastly, inferences and recommendations are given for future work in the Learning Analytics field.
Casalino G., Cafarelli B., del Gobbo E., Fontanella L., Grilli L., Guarino A., et al. (2021). Framing automatic grading techniques for open-ended questionnaires responses. A short survey. In P. Limone, R. Di Fuccio (Eds.), CEUR Workshop Proceedings. CEUR-WS.
Files in this record:
paper5.pdf (open access, Publisher's version, Adobe PDF, 1.19 MB)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/548092
Citations
  • Scopus: 2