2016
DOI: 10.1186/s12909-016-0578-4

Long-menu questions in computer-based assessments: a retrospective observational study

Abstract: Background: Computer-based assessments of paediatrics in our institution use a series of clinical cases, where information is progressively delivered to the students in sequential order. Three types of formats are mainly used: Type A (single answer), Pick N, and Long-menu. Long-menu questions require a long, hidden list of possible answers: based on the student's initial free-text response, the program narrows the list, allowing the student to select the answer. This study analyses the psychometric properties of…
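The narrowing mechanism described in the abstract is essentially a filter over a hidden answer list. Below is a minimal Python sketch with a hypothetical menu and a simple substring match; the actual matching rule used by the exam software is not specified in this excerpt.

LONG_MENU = [
    "Appendicitis", "Asthma", "Bronchiolitis", "Gastroenteritis",
    "Intussusception", "Meningitis", "Otitis media", "Pneumonia",
]  # hypothetical entries; in practice the hidden list can hold hundreds

def narrow_menu(free_text, menu, limit=5):
    # Return up to `limit` entries matching the student's partial input.
    query = free_text.strip().lower()
    return [entry for entry in menu if query in entry.lower()][:limit]

# A student who types "men" is offered only the matching entries:
print(narrow_menu("men", LONG_MENU))  # -> ['Meningitis']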

Cited by 13 publications (9 citation statements). References 13 publications (13 reference statements).
“…1). With a type I error rate of 5% and a type II error rate of 20%, interim analyses after 36, 56, 88 and 112 observations (these numbers were imposed by the organisation of the semestrial exam calendar), using Pocock’s stopping rules [12], would allow us to detect a difference of 0.077 in the point biserial correlation between the Type A and long-menu formats, a difference similar to the one estimated by the retrospective study [10]. In other words, among similar groups of students, within similar exams, for the same question stem, we would expect the discrimination of the long-menu answer format to be 0.077 higher than the discrimination of the type A answer format.…”
Section: Methods
Mentioning, confidence: 99%
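For context, the point-biserial correlation mentioned here is the standard item-discrimination index: the correlation between a dichotomous item score and the total exam score. A minimal sketch, assuming Python with SciPy and fabricated data; the 112 observations echo the final look in the interim-analysis schedule above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 112  # final look in the interim-analysis schedule above

# Fabricated data: binary item scores and total exam scores.
item_correct = rng.integers(0, 2, size=n_students)
total_score = 60 + 10 * rng.standard_normal(n_students) + 5 * item_correct

r_pb, p_value = stats.pointbiserialr(item_correct, total_score)
print(f"point-biserial discrimination: {r_pb:.3f} (p = {p_value:.4f})")

# Pocock's rule applies the same critical value at every interim look:
# for four two-sided looks at alpha = 0.05 it is roughly 2.36, so an
# interim z-statistic only stops the study early if |z| > 2.36.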
“…In a retrospective study assessing the psychometric performance of 553 items used in 13 computer-based paediatrics exams [10], we found that long-menu questions were easier than the classic single-answer format with five options (difficulty of 81.6% versus 75.7%; p = .005) and more discriminating (0.304 versus 0.222; p < .001). However, the retrospective observational design was a limitation to this study: since different questions were used in different formats, the contents and underlying learning objectives were likely to have had an impact on both difficulty and discrimination.…”
Section: Introduction
Mentioning, confidence: 99%
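Item difficulty here is the proportion of students answering an item correctly. A sketch of how the format comparison could look, assuming Python with SciPy, fabricated per-item difficulties centred on the reported means, and an independent-samples t-test; the study's actual statistical test is not stated in this excerpt.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fabricated per-item difficulties (fraction correct), one value per item,
# centred on the means reported above; spreads and item counts are invented.
difficulty_long_menu = rng.normal(0.816, 0.10, size=60).clip(0.0, 1.0)
difficulty_type_a = rng.normal(0.757, 0.10, size=60).clip(0.0, 1.0)

t_stat, p_value = stats.ttest_ind(difficulty_long_menu, difficulty_type_a)
print(f"long-menu mean difficulty: {difficulty_long_menu.mean():.3f}")
print(f"Type A mean difficulty:    {difficulty_type_a.mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")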
“…For this reason open-ended questions were not suitable. To keep random guessing to a minimum, it was decided to use long-menu questions for the assignments (Cerutti et al., 2016).…”
Section: Case Materials
Mentioning, confidence: 99%
“…Students are asked to perform step-by-step resolutions of several clinical cases presenting with different common pediatric complaints. Supplemental patient information, given sequentially, allows them to move towards case resolution [24]. The CBWE does not use adaptive computer testing; all students in a particular session face similar questions.…”
Section: Computer-Based Written Examination (CBWE)
Mentioning, confidence: 99%