2022
DOI: 10.1037/met0000367

Avoiding bias from sum scores in growth estimates: An examination of IRT-based approaches to scoring longitudinal survey responses.

Abstract: A huge portion of what we know about how humans develop, learn, behave, and interact is based on survey data. Researchers use longitudinal growth modeling to understand the development of students on psychological and social-emotional learning constructs across elementary and middle school. In these designs, students are typically administered a consistent set of self-report survey items across multiple school years, and growth is measured either based on sum scores or scale scores produced based on item respo…


citations
Cited by 32 publications
(48 citation statements)
references
References 49 publications
0
46
0
Order By: Relevance
“…Sum or average scores serve as a rough approximation that are sometimes justifiable, but there are noted weaknesses with sum or average scores for reflective constructs, especially with longitudinal data (e.g., Braun & Mislevy, 2005; McNeish & Wolf, 2020). For instance, Kuhfeld and Soland (2020) noted that omitting the measurement model for outcome variables in longitudinal data has adverse effects on the parameter estimates. Similarly, Neale, Lubke, Aggen, and Dolan (2005) showed that sum scores can bias variance components when measurement non-invariance is present.…”
Section: Measurement In Intensive Longitudinal Data
confidence: 99%
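The Neale et al. (2005) result quoted above can be illustrated with a short simulation. The sketch below is an assumed illustration, not code from any of the cited papers: it generates a five-item scale at two waves under a linear factor model with identical latent variance but weakened loadings at the second wave, and shows that the sum-score variance shifts even though the latent variance does not.

```python
# Assumed illustration: sum scores under measurement non-invariance.
# Latent trait variance is 1.0 at both waves; only the item loadings change.
import numpy as np

rng = np.random.default_rng(2020)
n = 50_000

eta_t1 = rng.normal(0.0, 1.0, n)   # latent trait, wave 1
eta_t2 = rng.normal(0.0, 1.0, n)   # latent trait, wave 2 (same variance, no true change)

loadings_t1 = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
loadings_t2 = np.array([0.8, 0.8, 0.4, 0.4, 0.4])  # three items weaken: non-invariance

def items(eta, loadings, rng):
    """Linear factor model: y_ij = lambda_j * eta_i + e_ij, residual variance 1."""
    return eta[:, None] * loadings[None, :] + rng.normal(0.0, 1.0, (eta.size, loadings.size))

sum_t1 = items(eta_t1, loadings_t1, rng).sum(axis=1)
sum_t2 = items(eta_t2, loadings_t2, rng).sum(axis=1)

# Expected variances: (sum of loadings)^2 * 1 + 5 residual variances
# wave 1: 4.0^2 + 5 = 21.0;  wave 2: 2.8^2 + 5 = 12.84
print("sum-score variance, wave 1:", round(sum_t1.var(), 2))
print("sum-score variance, wave 2:", round(sum_t2.var(), 2))
# The latent variance is identical across waves, so the drop in sum-score
# variance is an artifact of the loading change, i.e., a biased variance component.
```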
“…There are multiple possible limitations in using just a single timepoint to calibrate scores for pre-/postinterventions (Bauer & Curran, 2016; Gorter et al., 2015; Kuhfeld & Soland, 2020). First, one is assuming that the items function similarly before and after the intervention.…”
Section: Approaches For Calibrating and Scoring Multigroup Multi-time...
confidence: 99%
“…While questions of IRT model type and scoring approach have been examined in depth for scores at a single point in time (Kolen & Tong, 2010; Maydeu-Olivares et al., 1994), choices around the calibration sample and IRT model used for RCTs are less understood. We next describe possible calibration samples and models, including several discussed in Bauer and Curran (2016) and Kuhfeld and Soland (2020).…”
Section: Introduction
confidence: 99%
“…One pertinent area where these cutoffs are tenuously overgeneralized is one-factor models. Unidimensionality is often desirable in scale development and the psychometric literature has recently seen an uptick in one-factor models for assessing psychometric properties of scales or as alternatives to scoring scales with sum or item-average scores (e.g., Edwards & Wirth, 2009; Fried et al., 2016; Fried & Nesse, 2015; Kuhfeld & Soland, 2020; McNeish & Wolf, 2020; Shi et al., 2019; Slof-Op't Landt et al., 2009). The disconnect lies in that the cutoffs from the Hu and Bentler (1999) simulation were based on sensitivity to omitted cross-loadings and omitted factor covariances in three-factor models.…”
Section: Dynamic Fit Index Cutoffs For One-factor Models
confidence: 99%