2022
DOI: 10.21449/ijate.1124382

Automatic story and item generation for reading comprehension assessments with transformers

Abstract: Reading comprehension is one of the essential skills for students as they make the transition from learning to read to reading to learn. Over the last decade, the increased use of digital learning materials for promoting literacy skills (e.g., oral fluency and reading comprehension) in K-12 classrooms has been a boon for teachers. However, instant access to reading materials, as well as relevant assessment tools for evaluating students’ comprehension skills, remains a problem. Teachers must spend many hour…
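The paper's core idea is to use a transformer language model (GPT-2) to generate reading passages and accompanying assessment items. As a rough illustration of the passage-generation step only — not the authors' actual pipeline, which involves fine-tuning and readability control — a minimal sketch with the Hugging Face transformers library and the off-the-shelf gpt2 checkpoint might look like this; the prompt and decoding settings are illustrative assumptions:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical story seed; the paper conditions generation on prompts
# aimed at grade-appropriate stories, which is not reproduced here.
prompt = "Maya found a small turtle near the river."
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling keeps the passage varied; max_length caps its size.
output_ids = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In practice, the generated passage would still need the human review for length, vocabulary level, and content appropriateness that the abstract alludes to.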

Cited by 6 publications (5 citation statements)
References 40 publications (39 reference statements)
“…Although educational research that leverages LLMs to develop technological innovations for automating educational tasks is yet to achieve its full potential (i.e., most works have focused on improving model performances (Kurdi et al, 2020; Ramesh & Sanampudi, 2022)), a growing body of literature hints at how different stakeholders could potentially benefit from such innovations. Specifically, these innovations could play a vital role in addressing teachers' high levels of stress and burnout by reducing their heavy workloads through the automation of punctual, time‐consuming tasks (Carroll et al, 2022) such as question generation (Bulut & Yildirim‐Erbasli, 2022; Kurdi et al, 2020; Oleny, 2023), feedback provision (Cavalcanti et al, 2021; Nye et al, 2023), and the scoring of essays (Ramesh & Sanampudi, 2022) and short answers (Zeng et al, 2023). These innovations could also benefit both students and institutions by improving the efficiency of often tedious administrative processes such as learning resource recommendation, course recommendation and student feedback evaluation (Sridhar et al, 2023; Wollny et al, 2021; Zawacki‐Richter et al, 2019).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Technological advancements such as e-learning platforms and computer-based assessments have ushered in unprecedented learning opportunities for students, transforming traditional educational practices and assessments. This transformation creates a substantial demand for high-quality assessment items, which are vital for supporting student learning and effectively evaluating educational outcomes (Bulut & Yildirim-Erbasli, 2022; Mazzullo et al, 2023). Therefore, automatic item generation (AIG) has been proposed and gradually developed by measurement researchers as a solution to reduce the cost of developing a large number of assessment items (Alves et al, 2010; Gierl & Lai, 2015; Gierl et al, 2021; Lai et al, 2009).…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…LLMs are typically developed for general language processing purposes and often require […] (Tsai et al, 2021), LLM-generated texts (Bulut & Yildirim-Erbasli, 2022), teacher-created questions (Matsumori et al, 2023), stories and fairy tales (e.g., Ghanem et al, 2022), knowledge maps (Aigo et al, 2021), slides (Chughtai et al, 2022), textbooks (e.g., Steuer et al, 2020), and other course materials (e.g., Gopal, 2022).…”
Section: Data Source
Citation type: mentioning (confidence: 99%)
“…The questions generated by the system were manually evaluated in terms of their usefulness, with positive results. Moreover, Bulut et al (2022) used GPT-2 for reading passage generation in a reading comprehension assessment solution. Note that the authors pointed out that further research is still needed to adapt the output of the AI-based model for students.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
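To make the related-work mention of passage and question generation concrete, here is a hedged sketch of the complementary step: generating comprehension questions from a passage with a sequence-to-sequence model through the transformers pipeline API. The checkpoint name, the "generate question:" task prefix, and the decoding settings are assumptions for illustration, not the approach used by Bulut et al (2022):

```python
from transformers import pipeline

# Hypothetical checkpoint name; any seq2seq model fine-tuned for
# question generation could be substituted here.
qg = pipeline("text2text-generation", model="some-org/t5-question-generation")

passage = (
    "Maya found a small turtle near the river. She kept it safe until "
    "the ranger arrived and released it back into the water."
)

# Many question-generation fine-tunes expect a task prefix such as
# "generate question:"; this depends on the chosen checkpoint.
items = qg(
    "generate question: " + passage,
    max_length=64,
    num_return_sequences=3,
    do_sample=True,
)
for item in items:
    print(item["generated_text"])
```

As the quoted statement notes, such automatically generated items would still require manual review of their usefulness before classroom use.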