2008
DOI: 10.1111/j.1745-3984.2007.00057.x

Comparing the Difficulty of Examination Subjects with Item Response Theory

Abstract: Methods are presented for comparing grades obtained in a situation where students can choose between different subjects. It must be expected that the comparison between the grades is complicated by the interaction between the students' pattern and level of proficiency on one hand, and the choice of the subjects on the other hand. Three methods based on item response theory (IRT) for the estimation of proficiency measures that are comparable over students and subjects are discussed: a method based on a model wi…
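
The abstract describes IRT-based methods for putting grades from self-chosen subjects on a common scale. To make the basic idea concrete, the sketch below fits a simple dichotomous Rasch-type model by marginal maximum likelihood to simulated pass/fail grades with an incomplete design; it is an illustration under stated assumptions, not a reproduction of the article's models.

```python
# A minimal sketch, not the authors' implementation: a Rasch model fitted by marginal
# maximum likelihood (MML) to simulated pass/fail grades, where each student takes
# only a subset of subjects and the unchosen subjects are missing by design.
# Sample sizes, parameter values and the pass/fail simplification are illustrative
# assumptions; the article's methods additionally model the interaction between
# proficiency and subject choice, which this sketch does not.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_students, n_subjects = 2000, 6

theta = rng.normal(0.0, 1.0, n_students)          # student proficiencies
beta_true = np.linspace(-1.0, 1.0, n_subjects)    # generating subject difficulties

# Each student takes 3 of the 6 subjects (here chosen at random).
taken = np.zeros((n_students, n_subjects), dtype=bool)
for i in range(n_students):
    taken[i, rng.choice(n_subjects, size=3, replace=False)] = True

# Pass/fail outcomes under the Rasch model, observed only for taken subjects.
p_pass = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta_true[None, :])))
y = (rng.random((n_students, n_subjects)) < p_pass) & taken

# Gauss-Hermite quadrature over a standard normal proficiency distribution.
nodes, weights = hermegauss(21)
weights = weights / weights.sum()

def neg_loglik(beta):
    logit = nodes[:, None] - beta[None, :]                # (quad points, subjects)
    logp1 = -np.log1p(np.exp(-logit))                     # log P(pass)
    logp0 = -np.log1p(np.exp(logit))                      # log P(fail)
    # Per-student log-likelihood at each quadrature point; subjects that were not
    # taken contribute nothing, which is how missing-by-design data drop out.
    ll_nodes = y.astype(float) @ logp1.T + (taken & ~y).astype(float) @ logp0.T
    return -np.log(np.exp(ll_nodes) @ weights).sum()

beta_hat = minimize(neg_loglik, np.zeros(n_subjects), method="BFGS").x
print(np.round(beta_hat, 2))       # estimated subject difficulties
print(np.round(beta_true, 2))      # generating values, for comparison
```

Because the model places every student's proficiency and every subject's difficulty on one latent scale, difficulties estimated from different, overlapping groups of students remain comparable in a way that raw grade averages are not.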

Cited by 27 publications (19 citation statements)
References 32 publications (31 reference statements)
“…Taking into account that school grades do not constitute a validated test, a deeper analysis of the fit of the courses has been conducted, based on the inter-subject comparability approach (Tasmanian Qualification Authority, 2006 , 2007 ; Coe, 2008 ; Korobko et al, 2008 ). Table 1 shows the courses analyzed, the indices of fit, and the item-scale correlation.…”
Section: Results (mentioning)
confidence: 99%
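
The item-scale correlations referred to above are, in the usual classical-test-theory sense, the correlations of each course grade with the total of the remaining grades. A small sketch under that assumption, with simulated grades rather than the data behind Table 1:

```python
# Sketch: corrected item-total ("item-scale") correlations for a students-by-courses
# grade matrix, i.e. the correlation of each course grade with the sum of the other
# grades. The grades are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(3)
ability = rng.normal(size=(500, 1))
grades = np.clip(np.round(5.5 + 2 * ability + rng.normal(scale=1.0, size=(500, 8))), 1, 10)

rest_total = grades.sum(axis=1, keepdims=True) - grades   # total excluding the course itself
item_scale_r = [np.corrcoef(grades[:, j], rest_total[:, j])[0, 1]
                for j in range(grades.shape[1])]
print(np.round(item_scale_r, 2))
```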
“…Scores of subjects of each grade presented a high reliability, with Cronbach's alpha values of .93 for first-grade participants, and .94 for the second-grade participants. In the present study, all subjects were compulsory for students; thus, it was not possible for choice of examination to affect the measurement of the latent construct (Korobko, Glas, Bosker, & Luyten, 2008).…”
Section: Methods (mentioning)
confidence: 95%
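
The reliability values quoted in this statement are Cronbach's alpha coefficients. A minimal sketch of how such a coefficient is computed from a students-by-items score matrix (simulated data, not the study's):

```python
# Minimal sketch: Cronbach's alpha for a students-by-items score matrix.
# The data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
scores = ability + rng.normal(scale=0.6, size=(300, 10))   # 300 students, 10 items

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 2))
```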
“…In recent years such models have been studied most actively in the context of item response theory modelling of data from educational and psychological testing, where the items are categorical (often binary) responses to test questions, the substantive latent variables η describe respondents' abilities or psychological characteristics, and non-ignorable non-response may arise when respondents omit questions by choice or because of running out of time, for reasons which may be related to η (Mislevy and Wu, 1996; Mislevy, 2016). Developments, applications and evaluations of latent response propensity models in this field include Glas and Pimentel (2008), Korobko et al (2008), Bertoli-Barsotti and Punzo (2013), Pohl et al (2014) and Köhler et al (2015a,b). Of particular relevance for the cross-national focus of our paper is Rose et al (2010), who described different approaches to modelling cross-national data from educational tests with non-response, and carried out a multigroup analysis with latent response propensities for data from the Programme for International Student Assessment in 30 countries.…”
Section: Previous Literature on Latent Response Propensity Models (mentioning)
confidence: 99%
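
To illustrate why such response propensities matter (a simulation sketch with assumed parameter values, not a reproduction of any cited model): when the probability of answering an item depends on the same latent ability that drives the responses, the observed responses form a selective subset and naive summaries computed from them are biased.

```python
# Sketch: non-ignorable non-response. The omission probability depends on the latent
# ability theta itself, so complete-case summaries of the observed responses are
# biased relative to the full data. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
theta = rng.normal(size=n)                        # latent ability
p_correct = 1 / (1 + np.exp(-theta))              # one Rasch item with difficulty 0
y = rng.random(n) < p_correct

# Low-ability respondents are more likely to omit the item: the response propensity
# is correlated with ability, which is the non-ignorable mechanism.
p_respond = 1 / (1 + np.exp(-(1.0 + 1.5 * theta)))
observed = rng.random(n) < p_respond

print("proportion correct, full data:", round(float(y.mean()), 3))
print("proportion correct, observed :", round(float(y[observed].mean()), 3))
```

Latent response propensity models address this by modelling the omission process jointly with ability, rather than simply discarding the omitted responses.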