2020
DOI: 10.3102/1076998620956668
Commentary on “Validation Methods for Aggregate-Level Test Scale Linking: A Case Study Mapping School District Test Score Distributions to a Common Scale”

Abstract: In this commentary, I share my perspective on the goals of assessments in general and on linking assessments that were developed according to different specifications and for different purposes, and I propose several considerations for the authors and the readers. This brief commentary is structured around three perspectives: (1) the context of this research, (2) the methodology proposed here, and (3) the consequences for applied research.

Cited by 4 publications (3 citation statements)
References 9 publications
“…An important follow‐up question when considering whether a study's results are generalizable beyond the linking study is how problematic it would be to apply study‐specific linking results to more general situations. This question has been considered in commentaries of a recent attempt to link several state K–12 assessments to the NAEP scale (Bolt, 2021; Davison, 2021; Moses & Dorans, 2021; von Davier, 2021). The linking under consideration was one where school district means on state tests were linked to the NAEP scale, using statistical procedures to infer the district means on the state tests from published distributions of passing rates (Reardon et al., 2021).…”
Section: Three Proposed Types Of Test Linkings and Comparability Rest...
confidence: 99%
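
The excerpt above refers to statistical procedures that infer district means on state tests from published distributions of passing rates. As a minimal sketch of how such an inference can work, assuming normally distributed scores, known proficiency cut scores, and wholly hypothetical numbers (an illustration of the general idea, not Reardon et al.'s actual estimator), one can fit a normal distribution to published category proportions by maximum likelihood:

# Illustrative only: recover a district's latent mean and SD from published
# proficiency-category proportions, assuming scores are normally distributed
# and the category cut scores are known. Cut scores, proportions, and the
# student count are hypothetical, not taken from any study cited above.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

cuts = np.array([220.0, 245.0, 270.0])       # hypothetical cut scores (4 categories)
props = np.array([0.15, 0.30, 0.35, 0.20])   # hypothetical published proportions
n_students = 500                             # hypothetical number of students tested

def neg_loglik(params):
    """Multinomial negative log-likelihood of the category proportions under N(mu, sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                # parameterize on log scale so sigma > 0
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    cell_probs = np.diff(norm.cdf(bounds, loc=mu, scale=sigma))
    return -n_students * np.sum(props * np.log(cell_probs))

fit = minimize(neg_loglik, x0=[250.0, np.log(25.0)], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated mean: {mu_hat:.1f}, estimated SD: {sigma_hat:.1f}")

Mapping such district-level estimates onto the NAEP scale is where the linking and validation questions discussed in these commentaries arise.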
“…Moses and Dorans describe our approach as “indirect validation,” but we would describe our results in tables 1 and 3 as direct validation. Both von Davier (2021) and Moses and Dorans (2021) suggest we explore subgroup analyses. It was not clear to us whether they meant something besides the subgroup analyses we had presented at the bottom of table 1 in our original paper.…”
Section: Responses To Additional Points From Reviewers
confidence: 99%
“…We dedicate this issue to the paper, those commentaries, and the authors’ response. The commentaries include important points for vetting the SEDA data set: (1) Bolt (2021) outlines research directions for exploring sources of district-level bias and discusses the use of multilevel models to disentangle the role of within-state and between-state components of the aggregate linking; (2) Davison (2021) reviews linking processes and discusses the importance of assessing the absence of bias in estimates of district means and ensuring the validity of scores across the performance continuum; (3) Moses and Dorans (2021) critique the validation effort by discussing recommendations for linking state assessments to NAEP in addition to providing an empirical replication; and (4) von Davier (2021) provides readers with a fireside chat on linking methodology that covers the importance of considering the consequences of using data that is formed with tests that consist of different content, constructs, and response processes. We hope this issue sparks continued vetting of the SEDA database and the broader conversation about how to make the most of available educational data.…”
confidence: 99%