2008. DOI: 10.1177/0146621607300047
Consistent Estimation of Rasch Item Parameters and Their Standard Errors Under Complex Sample Designs

Abstract: U.S. state educational testing programs administer tests to track student progress and hold schools accountable for educational outcomes. Methods from item response theory, especially Rasch models, are usually used to equate different forms of a test. The most popular method for estimating Rasch models yields inconsistent estimates and relies on ad hoc adjustments to obtain good approximations. Furthermore, psychometricians have paid little attention to the estimation of effective standard errors for Rasch mod…

Cited by 6 publications (4 citation statements). References 18 publications (40 reference statements).
“…Other researchers have included person‐level covariates to improve the estimation of item and person parameters (e.g., Mislevy, 1987; Mislevy & Bock, 1989). Another attempt was made by Cohen, Chan, Jiang, and Seburn (2008) to use a nonparametric method to study Rasch modeling under the complex sample designs typically found in state testing programs. In addition, multilevel IRT models (Johnson & Jenkins, 2005; Li, Oranje, & Jiang, 2009) have been applied in large‐scale survey assessments such as the National Assessment of Educational Progress.…”
Section: Local Person Dependence
confidence: 99%
“…The normalized weights, ŵ_hij,R, are found by dividing…³

³ iAM is free psychometric software available from the American Institutes for Research (AIR) and Jon Cohen, and can be downloaded at http://am.air.org. The software uses an item response theory module (iAM) for parameter estimation of all major item response theory models, along with design-consistent estimates of standard errors (Cohen, Chan, Jiang, & Seburn, 2008). The standard errors are estimated using Taylor series approximations, which take into account student weights, stratification information, and clustering (Binder, 1983; Woodruff, 1971).…”
Section: Sampling Weights
confidence: 99%
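The quoted footnote describes design-consistent standard errors computed via Taylor series linearization, accounting for student weights, stratification, and clustering. The following is a minimal sketch of that general technique for a weighted mean, not a reconstruction of iAM's actual implementation; the divisor used for weight normalization is also an assumption (the quoted sentence is truncated), with division by the mean weight being one common convention:

```python
import numpy as np

def normalized_weights(w):
    """Scale sampling weights so they sum to the sample size.

    Assumption: the quoted text truncates before naming the divisor;
    dividing by the mean weight is one common normalization.
    """
    w = np.asarray(w, dtype=float)
    return w / w.mean()

def taylor_se_weighted_mean(y, w, strata, clusters):
    """Taylor-linearization SE of a weighted mean under a stratified,
    clustered design (in the spirit of Binder, 1983).

    Linearizes the ratio estimator ybar = sum(w*y)/sum(w), then sums
    between-cluster variation of the linearized totals within strata.
    """
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    strata = np.asarray(strata)
    clusters = np.asarray(clusters)

    wsum = w.sum()
    ybar = np.sum(w * y) / wsum
    z = w * (y - ybar) / wsum  # linearized per-student contributions

    var = 0.0
    for h in np.unique(strata):
        in_h = strata == h
        # total linearized contribution of each primary sampling unit
        totals = np.array([z[in_h & (clusters == c)].sum()
                           for c in np.unique(clusters[in_h])])
        n_h = len(totals)
        if n_h > 1:
            var += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
    return ybar, np.sqrt(var)
```

Because the variance is built from cluster totals within strata, students in the same school contribute jointly rather than independently, which is exactly what an SRS-based formula misses.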
“…However, the procedures and formulas provided by Kolen and Brennan assume simple random sampling, so they do not capture the additional error in the standard error of equating caused by design effects in large-scale assessment sampling. One example of research that explicitly incorporates the design effect in the standard error of equating was reported by Cohen et al. (2008).…”
Section: Standard Error of Equating
confidence: 99%
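The quoted passage notes that SRS-based equating formulas understate the standard error under complex sampling. A common first-order correction, sketched below, inflates the SRS standard error by the square root of the design effect; this illustrates the general relationship and is not necessarily the specific procedure used by Cohen et al. (2008):

```python
import math

def design_adjusted_se(se_srs, deff):
    """Inflate an SRS-based standard error (e.g., of an equating
    function at a score point) by the design effect (Kish, 1965):
    SE_design = sqrt(deff) * SE_srs.
    """
    return math.sqrt(deff) * se_srs
```

With a design effect of 4, for example, the true standard error is double the SRS value, so ignoring the design halves the apparent uncertainty.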
“…These important dependencies are commonly ignored, resulting in estimates that may be inconsistent and standard errors that do not adequately characterize the true variance. One option to consider in the presence of a non-zero design effect (Kish 1965) is to regard the point estimates as retaining some utility, but construct robust standard errors in recognition of the fact that correlated observations provide less information than an equivalent number from a simple random sample (Binder 1983; Cohen, Jiang, and Seburn 2005).…”
Section: Introduction
confidence: 99%