2006
DOI: 10.1119/1.2174053

Testing the test: Item response curves and test quality

Abstract: We present a simple technique for evaluating multiple-choice questions and their answers beyond the usual measures of difficulty and the effectiveness of distractors. The technique involves the construction and qualitative consideration of item response curves and is based on item response theory from the field of education measurement. To demonstrate the technique, we apply item response curve analysis to three questions from the Force Concept Inventory. Item response curve analysis allows us to characterize …

Cited by 78 publications (103 citation statements)
References 13 publications (6 reference statements)
“…For future work, we plan to carry out the qualitative analysis of actual answer choices and patterns as well as quantitative analysis such as item response curves [9,10]. Moreover, we plan to disambiguate our data in order to assess the false-positive ratios by major area and analyze whether they differ among these populations.…”
Section: Discussion (mentioning)
confidence: 99%
“…Similarly, if we evaluate and unify the systematic error from the validation of the distractor and the context [8][9][10][11], we could derive the systematic error of the FCI, thereby allowing us to compare the results considering these inadequacies without modifying them and to effectively measure the effect of physics education. This conclusion would enable our study to provide a positive and far-reaching effect in the field of physics education research.…”
Section: Discussion (mentioning)
confidence: 99%
“…This can be done by graphing the proportion of students who choose the correct response versus the total test score. This is a much easier method that will render useful results for a FASI (Ding & Beichner, 2009; Morris et al., 2006). Full IRT requires specialized statistical software and large sample sizes, from 200 (one-parameter Rasch, arguably not appropriate for a FASI) to 1,000 students (Crocker & Algina, 1986).…”
Section: Carry Out Validation Interviews On Test Questions (mentioning)
confidence: 99%
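The item response curve construction quoted above (fraction of students choosing the correct answer, plotted against total test score) can be scripted in a few lines. The sketch below is illustrative only, not taken from the cited papers; the random `responses` matrix and the chosen `item` index are stand-in assumptions for real classroom data.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical data: 1 = correct, 0 = incorrect; 300 students, 30 questions.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(300, 30))

    item = 12                      # index of the question being examined (arbitrary)
    total = responses.sum(axis=1)  # each student's total test score

    # For every observed total score, compute the fraction of students at that
    # score who answered the chosen item correctly.
    scores = np.arange(total.min(), total.max() + 1)
    fraction_correct = [
        responses[total == s, item].mean() if np.any(total == s) else np.nan
        for s in scores
    ]

    plt.plot(scores, fraction_correct, "o-")
    plt.xlabel("Total test score")
    plt.ylabel("Fraction choosing the correct response")
    plt.show()

Real data would replace the random matrix; with class-sized samples, adjacent scores are often binned so each plotted point rests on enough students to be meaningful.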
“…Apart from the difficulty of the questions and the effectiveness of distractors (wrong possible choices of multiple-choice questions, shown for example in table 3 for question Q2), the item response curve (IRC) analysis can be done on the given questions [16,17]. The results of the IRC give a deeper insight into the quality of the questions, as well as into the abilities of the participating students.…”
Section: Results (mentioning)
confidence: 99%
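The same idea extends to the distractors mentioned above: plotting one curve per answer option shows which wrong choices attract low-, mid-, or high-scoring students. A sketch under the same illustrative assumptions, this time starting from raw answer letters and a hypothetical answer key rather than 0/1 scores:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical raw responses: answer letters A-E for 300 students and 30
    # questions, plus an answer key used to compute total scores.
    rng = np.random.default_rng(1)
    options = np.array(list("ABCDE"))
    answers = rng.choice(options, size=(300, 30))
    key = rng.choice(options, size=30)

    item = 1                              # question of interest (arbitrary example)
    total = (answers == key).sum(axis=1)  # total test score per student
    scores = np.arange(total.min(), total.max() + 1)

    # One curve per option: fraction of students at each total score picking it,
    # so the correct answer and every distractor can be compared side by side.
    for opt in options:
        frac = [
            np.mean(answers[total == s, item] == opt) if np.any(total == s) else np.nan
            for s in scores
        ]
        label = f"{opt} (correct)" if opt == key[item] else opt
        plt.plot(scores, frac, "o-" if opt == key[item] else "--", label=label)

    plt.xlabel("Total test score")
    plt.ylabel("Fraction choosing each option")
    plt.legend()
    plt.show()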