2012
DOI: 10.1111/j.1365-2923.2012.04289.x

Using automatic item generation to create multiple-choice test items

Abstract: Medical Education 2012; 46: 757–765. Context: Many tests of medical knowledge, from the undergraduate level to the level of certification and licensure, contain multiple‐choice items. Although these are efficient in measuring examinees' knowledge and skills across diverse content areas, multiple‐choice items are time‐consuming and expensive to create. Changes in student assessment brought about by new forms of computer‐based testing have created the demand for large numbers of multiple‐choice items. Our current…

Cited by 93 publications (73 citation statements). References 11 publications.
“…The outcomes from the second and third examples can be obtained from the first author. All the generated items were created by medical SMEs using the three-step AIG process that included developing a cognitive model, creating an item model, and generating items (Gierl, Lai, & Turner, 2012). The items in each dataset were generated based on individual cognitive models and the derived item models.…”
Section: Results (mentioning)
Confidence: 99%
“…The first step in the AIG process is to develop cognitive models, which highlight the knowledge and skills examinees require to solve the item and specify the content features in the items. To create the cognitive models, the SMEs are asked to identify and describe the key information that would be used to solve a test item (Gierl, Lai, & Turner, 2012). This cognitive model is used to guide the detailed rendering needed for item generation.…”
Section: Automatic Item Generation (AIG) (mentioning)
Confidence: 99%
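As a rough illustration of this three-step process, the Python sketch below instantiates a toy item model. The cognitive-model variables, the stem template, and all content are hypothetical examples chosen for clarity, not taken from Gierl, Lai, and Turner (2012).

# A minimal sketch of template-based automatic item generation (AIG).
# All variable names and clinical content below are illustrative assumptions.
from itertools import product

# Step 1 (cognitive model): the variables an examinee must reason over,
# with value ranges a subject-matter expert might specify.
cognitive_model = {
    "age": [25, 45, 65],
    "symptom": ["chest pain", "shortness of breath"],
}

# Step 2 (item model): a stem template with slots for those variables.
stem_template = ("A {age}-year-old patient presents with {symptom}. "
                 "What is the most appropriate next step?")

# Step 3 (item generation): instantiate the template over all
# combinations of variable values to produce candidate item stems.
def generate_items(template, variables):
    keys = list(variables)
    for values in product(*(variables[k] for k in keys)):
        yield template.format(**dict(zip(keys, values)))

for item in generate_items(stem_template, cognitive_model):
    print(item)

In practice the cognitive model would also constrain which variable combinations are clinically plausible, so a production generator would filter the raw combinations rather than emit them all.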
“…Current literature involves the creation of complex algorithms or computer software, with low‐validity question banks or workshops that are either descriptive or too technical for peer teachers to reproduce. By having a structured format for question writing and question selection, we tried to negate the common criticism of near‐peer teaching and peer‐assisted teaching: i.e.…”
Section: Discussion (mentioning)
Confidence: 99%
“…The instructor garnered great educational satisfaction and enjoyment from the session, and was saved from creating new MCQs from scratch. Previous reports describe creating questions using computers (Gierl et al., 2012), innovative workshops (Droegemueller et al., 2005), and questions from other schools (Freeman et al., 2010) to augment an institution's question bank. The curriculum also gained an influx of multiple-choice questions that have undergone multiple steps of review.…”
Section: What To Do Next (mentioning)
Confidence: 99%