2022
DOI: 10.7717/peerj-cs.1010
Automatic computer science domain multiple-choice questions generation based on informative sentences

Abstract: Students require continuous feedback for effective learning. Multiple-choice questions (MCQs) are extensively used among various assessment methods to provide such feedback. However, manual MCQ generation is a tedious task that requires significant effort, time, and domain knowledge. A system is therefore needed that can automatically generate MCQs from a given text. The automatic generation of MCQs can be carried out in three sequential steps: extracting informative sentences from the textu…
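The abstract names a three-step pipeline but is truncated after the first step (informative-sentence extraction). As a rough illustration only, the sketch below follows the pipeline that is standard in this line of work: select informative sentences, pick an answer key, blank it out to form the stem, and generate distractors. The heuristics, function names, sample passage, and distractor vocabulary are hypothetical and are not taken from the paper.

```python
import re
from collections import Counter

def informative_sentences(text, top_k=3):
    """Rank sentences by a simple term-frequency score and keep the top_k.
    Stand-in for the informative-sentence extraction step."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    return sorted(sentences, key=score, reverse=True)[:top_k]

def select_key(sentence):
    """Pick a candidate answer (key); here simply the longest content word."""
    tokens = re.findall(r'[A-Za-z][A-Za-z-]+', sentence)
    return max(tokens, key=len)

def make_stem(sentence, key):
    """Turn the sentence into a question stem by blanking out the key."""
    return sentence.replace(key, "______", 1)

def make_distractors(key, vocabulary, n=3):
    """Choose distractors of similar length to the key from a small vocabulary.
    A real system would use semantic similarity (e.g. word embeddings)."""
    candidates = [w for w in vocabulary if w.lower() != key.lower()]
    candidates.sort(key=lambda w: abs(len(w) - len(key)))
    return candidates[:n]

if __name__ == "__main__":
    passage = (
        "A stack is a linear data structure that follows the last-in first-out principle. "
        "Elements are inserted and removed only at the top of the stack. "
        "Stacks are widely used to implement function calls and expression evaluation."
    )
    domain_terms = ["queue", "array", "graph", "heap", "tree"]  # hypothetical distractor pool
    for sent in informative_sentences(passage, top_k=1):
        key = select_key(sent)
        print("Q:", make_stem(sent, key))
        print("Options:", [key] + make_distractors(key, domain_terms))
```

In practice, the informativeness score would come from a trained classifier or TF-IDF/embedding features, and distractors from domain ontologies or embedding similarity, as is typical in automatic MCQ generation; the frequency and length heuristics above are placeholders.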

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
5
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
4
3

Relationship

0
7

Authors

Journals

Cited by 10 publications (6 citation statements); references 55 publications (86 reference statements).
“…Such evaluations often involved students differentiating AI‐generated from human‐generated content (Merine & Purkayastha, 2022; Song et al, 2022) and assessing student satisfaction with AI‐generated responses (Nguyen et al, 2021). Likewise, two studies have involved experts in evaluating specific features of the content generated by the LLMs‐based innovations, such as informativeness (Maheen et al, 2022) and cognitive level (Moore et al, 2022). Surveys have been used to evaluate students' experience with LLMs‐based innovations from multiple perspectives, such as the quality and difficulty of AI‐generated questions (Drori et al, 2022; Li & Xing, 2021) and potential learning benefits of the systems (Jayaraman & Black, 2022).…”
Section: Results
mentioning
confidence: 99%
“…Such evaluations often involved students differentiating AI-generated from human-generated content [57,33] and assessing student satisfaction with AI-generated responses [37]. Likewise, two studies have involved experts in evaluating specific features of the content generated by the LLMs-based innovations, such as informativeness [31] and cognitive level [36]. Surveys have been used to evaluate students' experience with LLMs-based innovations from multiple perspectives, such as the quality and difficulty of AI-generated questions [17,30] and potential learning benefits of the systems [25].…”
Section: Ethical Challenges - RQ3
mentioning
confidence: 99%
“…Unstructured text sources often organize knowledge in the form of articles or paragraphs and are crucial in the field of question answering. In practice, multiple-answer questions play an important role in various assessment methods (Maheen et al., 2022). Open-domain question answering based on multi-paragraph multi-answer reasoning challenges the ability to comprehensively utilize evidence from large-scale corpora.…”
Section: Related Work
mentioning
confidence: 99%