2018
DOI: 10.1111/jedm.12176
Controlling Bias in Both Constructed Response and Multiple‐Choice Items When Analyzed With the Dichotomous Rasch Model

Abstract: Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs with tests which include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have a combination of multiple-choice and constructed response items. Using vertically scaled numeracy assessments from a large-scale assessment program, this article show…
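The guessing bias the abstract refers to can be illustrated with a small sketch. This is not the paper's method: the function names and the chance level `c = 0.25` are illustrative assumptions. Under the dichotomous Rasch model, success probability depends only on the person-item distance; adding a guessing floor (3PL-style) inflates success rates most on hard items, which is what biases Rasch difficulty estimates as a function of item difficulty.

```python
import math

def rasch_prob(theta, delta):
    """Dichotomous Rasch model: P(correct) from person ability theta and item difficulty delta."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def guessing_prob(theta, delta, c):
    """3PL-style success probability with a guessing floor c (illustrative, not the paper's model)."""
    return c + (1.0 - c) * rasch_prob(theta, delta)

# A hard item (delta = 2) attempted by a low-ability person (theta = -1):
p_rasch = rasch_prob(-1.0, 2.0)           # ~0.047 under the Rasch model
p_guess = guessing_prob(-1.0, 2.0, 0.25)  # ~0.286 once guessing is possible
```

The gap between the two probabilities is largest exactly where the item is hard relative to the person, so guessing distorts hard items' difficulty estimates more than easy ones.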

Cited by 7 publications (3 citation statements); references 16 publications (22 reference statements).
“…These findings agree with earlier claims by Bachman and Palmer (1996: 202), who indicated that "an item would be significantly more difficult if the options were closer in meaning because that would make identifying the correct answer more demanding for the test-taker". In support, Andrich and Marais (2018) declared that the more difficult the item, the greater the degree of guessing, and that persons with greater proficiency tended to correctly answer the more difficult items at a greater rate than the less proficient.…”
Section: Discussion
confidence: 99%
“…Distractor response options are an important part of test development, and the quality of the distractors selected determines the difficulty and discrimination of items (Andrich and Styles 2011; Rodriguez 2005). We chose to select distractors at the item level based on the confusions of emotion labels from Study 1.…”
Section: Selection of Distractor Items
confidence: 99%
“…Table 1 summarizes the fixed and manipulated characteristics of the simulated data. In all conditions, I generated ratings using a five-category rating scale (x = 0, 1, 2, 3, 4) because this scale length is popular in applied and methodological psychometric research related to both affective scales (e.g., Bozdağ & Bilge, 2022; Haddad et al., 2021; Hagedoorn et al., 2018; Moors, 2008; Waugh, 2002) and performance assessments that include ordinal rating scales (e.g., Andrich, 2010; Wind & Walker, 2019). I generated person parameters (θ) and overall item locations (δi) from a standard normal distribution to reflect recent simulation and real-data IRT research related to rating scales (Buchholz & Hartig, 2019; Finch, 2011; Wind & Guo, 2019; Wolfe et al., 2014).…”
Section: Simulation Study
confidence: 99%
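The data-generation steps quoted above (persons and item locations drawn from a standard normal, ratings on a five-category scale) can be sketched as follows. This assumes the Andrich rating scale model as the generating model; the threshold values, sample sizes, and seed are illustrative and not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(42)

def rsm_probs(theta, delta, taus):
    """Andrich rating scale model: category probabilities for x = 0..m.

    theta: person parameter; delta: item location; taus: m threshold parameters.
    """
    # Cumulative logits: psi_0 = 0, psi_k = sum_{j<=k} (theta - delta - tau_j)
    steps = theta - delta - taus
    psi = np.concatenate(([0.0], np.cumsum(steps)))
    expv = np.exp(psi - psi.max())  # subtract max for numerical stability
    return expv / expv.sum()

n_persons, n_items = 500, 10
thetas = rng.standard_normal(n_persons)  # person parameters ~ N(0, 1)
deltas = rng.standard_normal(n_items)    # item locations ~ N(0, 1)
taus = np.array([-1.5, -0.5, 0.5, 1.5])  # illustrative centered thresholds (5 categories)

# Sample one ordinal rating (0..4) per person-item pair from the model probabilities.
ratings = np.empty((n_persons, n_items), dtype=int)
for i, th in enumerate(thetas):
    for j, d in enumerate(deltas):
        ratings[i, j] = rng.choice(5, p=rsm_probs(th, d, deltas[j] * 0 + taus if False else taus))
```

Note the common thresholds `taus` are shared across items, which is the defining constraint of the rating scale model as opposed to the partial credit model.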