2019. DOI: 10.3102/1076998619890589

Adaptive Pairwise Comparison for Educational Measurement

Abstract: Pairwise comparison is becoming increasingly popular as a holistic measurement method in education. Unfortunately, many comparisons are required for reliable measurement. To reduce the number of required comparisons, we developed an adaptive selection algorithm (ASA) that selects the most informative comparisons while taking the uncertainty of the object parameters into account. The results of the simulation study showed that, given the number of comparisons, the ASA resulted in smaller standard errors of obje…
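The abstract describes the ASA only at a high level (pick the most informative comparison while accounting for the uncertainty of the object parameters), so the sketch below is a generic illustration of that idea under a Bradley-Terry model, not the authors' algorithm; the function names, the p(1 - p) information measure, and the ad hoc variance weighting are all assumptions made for the example.

    import numpy as np

    def expected_information(theta_i, theta_j):
        # Fisher information of a single Bradley-Terry comparison, p * (1 - p):
        # largest when the two objects are currently estimated to be about equal.
        p = 1.0 / (1.0 + np.exp(-(theta_i - theta_j)))
        return p * (1.0 - p)

    def select_next_pair(theta_hat, se, already_compared):
        # Greedy adaptive selection: score every not-yet-used pair by its expected
        # information, up-weighted by the current uncertainty (squared standard
        # errors) of the two objects, and return the highest-scoring pair.
        n = len(theta_hat)
        best_pair, best_utility = None, -np.inf
        for i in range(n):
            for j in range(i + 1, n):
                if frozenset((i, j)) in already_compared:
                    continue
                info = expected_information(theta_hat[i], theta_hat[j])
                utility = info * (se[i] ** 2 + se[j] ** 2)
                if utility > best_utility:
                    best_pair, best_utility = (i, j), utility
        return best_pair

    # Example: four objects with current estimates and standard errors.
    theta_hat = np.array([0.1, 0.3, -0.5, 1.2])
    se = np.array([0.9, 0.4, 0.8, 0.3])
    print(select_next_pair(theta_hat, se, already_compared=set()))

In this toy rule, p(1 - p) favors pairs whose current estimates are close, and the variance term pushes comparisons toward objects that are still poorly measured; the published ASA may use a different information criterion and a different way of propagating uncertainty.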

Cited by 16 publications (45 citation statements); references 22 publications.

Citation statements (ordered by relevance):
“…Previous studies provide varying recommendations for the number of comparisons per object required for reliable measurement. The recommendations range from 9 (Pollitt, 2012), to 12 (Verhavert et al., 2019), to 20 based on average reliability and 30 based on the lower bound of a 68% confidence interval for reliability equal to .80 (Crompvoets et al., 2020). Fourth, we varied the number of raters, using two, three, and five raters. We assumed that all raters performed an equal number of comparisons.…”
Section: Methods (mentioning; confidence: 99%)
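To get a feel for what such per-object recommendations imply in total workload, note that each comparison involves two objects, so a design with N objects and k comparisons per object needs roughly N x k / 2 judgments overall. With hypothetical numbers (not taken from the cited studies), 100 essays at k = 20 comparisons per essay would require about 100 x 20 / 2 = 1,000 pairwise judgments.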
“…Fifth, we varied rater agreement as either perfectly model compliant, where the comparisons of all raters are based on the preference probabilities in the generating model, or imperfectly model compliant, where one rater does not comply perfectly with the generating model. Although perfect model compliance for all raters is unlikely, we included this condition because it facilitates comparison with previous research (Crompvoets et al, 2020). We simulated less than perfect model compliance by adding a noise component to the true values of the latent variable for one of the raters:…”
Section: Methods (mentioning; confidence: 99%)
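The excerpt breaks off at the colon where the citing article presumably states its noise model, so the equation itself is not reproduced above. As a purely illustrative sketch of the general form such an additive perturbation could take (an assumption, not the cited study's actual specification), the non-compliant rater's comparisons could be driven by

    θ*_j = θ_j + ε_j,   ε_j ~ N(0, σ²_noise),

where θ_j is the true value of object j and ε_j is noise drawn independently for that rater, so that a larger σ²_noise means weaker agreement with the generating model.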