Determining whether measures are equally valid for all individuals is a core component of psychometric analysis. Traditionally, the evaluation of measurement invariance (MI) involves comparing independent groups defined by a single categorical covariate (e.g., men and women) to determine if there are any items that display differential item functioning (DIF). More recently, Moderated Nonlinear Factor Analysis (MNLFA) has been advanced as an approach for evaluating MI/DIF simultaneously over multiple background variables, categorical and continuous. Unfortunately, conventional procedures for detecting DIF do not scale well to the more complex MNLFA. The current manuscript therefore proposes a regularization approach to MNLFA estimation that penalizes the likelihood for DIF parameters (i.e., rewarding sparse DIF). This procedure avoids the pitfalls of sequential inference tests, is automated for end users, and is shown to perform well in both a small-scale simulation and an empirical validation study.

The foundation of science is measurement, and the development of both reliable and valid measures has long been a critical enterprise for researchers in psychology and allied fields. One key aspect of validity concerns whether the scores produced by a measure, such as ability scores on an achievement test or depression scores from a symptom inventory, are directly comparable across individuals. If the scores over-estimate the latent trait for some people (e.g., men) and under-estimate it for others (e.g., African-Americans), then observed score differences will not accurately reflect true differences in the quantity being measured (Millsap, 2011). Score comparisons between individuals will be invalid, and results obtained from using these scores in subsequent analyses will be distorted. Some effects may be masked, others exaggerated, and still others obtained entirely as artefacts of poor measurement (Curran et al., in press).

Recognizing the importance of this issue, psychometricians have devoted considerable attention to developing theory and methods for assessing whether scores are equivalent in meaning and metric across individuals, a condition referred to as measurement invariance.
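To make the penalized-likelihood idea concrete, here is a minimal numerical sketch in Python. It is not the manuscript's implementation: it assumes a linear one-factor model in which item intercepts may shift with a single covariate, and it applies a smoothed lasso penalty to those DIF parameters. The data-generating values, the penalty weight tau, and the choice of scipy's L-BFGS-B optimizer are all illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n, p = 500, 4
x = rng.normal(size=n)                      # one continuous background variable
eta = 0.3 * x + rng.normal(size=n)          # latent factor; mean shifts with x
lam_true = np.array([1.0, 0.8, 0.9, 0.7])
beta_true = np.array([0.0, 0.0, 0.6, 0.0])  # one item carries intercept DIF
Y = np.outer(x, beta_true) + np.outer(eta, lam_true) \
    + rng.normal(scale=0.5, size=(n, p))

def unpack(t):
    nu, lam = t[:p], t[p:2 * p]
    beta, gamma = t[2 * p:3 * p], t[3 * p]
    theta = np.exp(t[3 * p + 1:])           # residual variances, kept positive
    return nu, lam, beta, gamma, theta

def penalized_nll(t, tau):
    nu, lam, beta, gamma, theta = unpack(t)
    # marginal moments implied by the model (factor residual variance fixed at 1)
    cov = np.outer(lam, lam) + np.diag(theta)
    mu = nu + np.outer(x, beta + gamma * lam)
    ll = multivariate_normal(np.zeros(p), cov).logpdf(Y - mu).sum()
    # smoothed lasso penalty on the DIF parameters: rewards sparse DIF, and in
    # doing so separates item-level DIF from the covariate's effect on the factor
    return -ll + tau * np.sum(np.sqrt(beta ** 2 + 1e-8))

t0 = np.concatenate([Y.mean(0), np.ones(p), np.zeros(p), [0.0], np.log(Y.var(0))])
fit = minimize(penalized_nll, t0, args=(5.0,), method="L-BFGS-B")
print("estimated DIF effects:", np.round(unpack(fit.x)[2], 2))

In practice the penalty weight would be tuned, for example by refitting over a grid of tau values and selecting by BIC; the fixed value above is only for demonstration.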
Premorbid adjustment varies widely among individuals with schizophrenia, and has been shown to bear significantly on prodrome and onset characteristics, and on cognition, symptoms, and functioning after onset. The current analysis focused on the Premorbid Adjustment Scale (PAS), a retrospective measure assessing social and academic function at several time points from early childhood to illness onset. In an effort to explore discrete developmental subtypes, we applied latent class growth analysis (LCGA) to data from the PAS in our sample of individuals with schizophrenia (N = 208), finding three latent trajectory classes. The first of these classes showed consistently adequate-to-good social and academic functioning prior to onset; the second showed initially good function and deterioration with time until onset; the third showed poor functioning in childhood that deteriorated further during the years up to diagnosis. The classes differed significantly in terms of age of onset, processing speed, and functioning after onset. There were no significant differences in symptomatology. Our findings illustrate a potentially powerful methodological approach to the problem of heterogeneity in schizophrenia research, and add weight to the notion that aspects of premorbid history may be useful for subtyping schizophrenia patients. The potential implications of this subtyping strategy, including those pertaining to potential genetics studies, are discussed.
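As an illustrative sketch of how LCGA extracts trajectory classes (not the PAS analysis itself), the following Python code runs a small EM algorithm for a mixture of linear growth curves with class-specific intercepts and slopes and a common residual variance; holding within-class growth variability at zero is the defining restriction of LCGA relative to growth mixture models. All values are simulated for illustration.

import numpy as np

rng = np.random.default_rng(7)
times = np.arange(5.0)                                # illustrative assessment points
X = np.column_stack([np.ones_like(times), times])     # design: intercept + slope
n, K = 300, 3
# three illustrative trajectory classes (intercept, slope per class)
coefs_true = np.array([[1.0, 0.0], [1.0, -0.5], [3.0, -0.3]])
z = rng.integers(0, K, size=n)
Y = coefs_true[z] @ X.T + rng.normal(scale=0.6, size=(n, len(times)))

def lcga_em(Y, X, K, iters=200):
    n, T = Y.shape
    # crude start: split people by their overall mean level
    parts = np.array_split(np.argsort(Y.mean(1)), K)
    B = np.array([np.linalg.lstsq(X, Y[idx].mean(0), rcond=None)[0] for idx in parts])
    pi, s2 = np.full(K, 1.0 / K), Y.var()
    for _ in range(iters):
        mu = B @ X.T                                      # K x T class mean curves
        d2 = ((Y[:, None, :] - mu[None]) ** 2).sum(-1)    # n x K squared distances
        logp = np.log(pi) - 0.5 * (d2 / s2 + T * np.log(2 * np.pi * s2))
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp); r /= r.sum(1, keepdims=True)    # E-step: responsibilities
        pi = r.mean(0)                                    # M-step: class proportions
        for k in range(K):                                # weighted LS per class
            B[k] = np.linalg.solve(X.T @ X * r[:, k].sum(), X.T @ (r[:, k] @ Y))
        s2 = (r[..., None] * (Y[:, None, :] - B @ X.T) ** 2).sum() / (n * T)
    return B, pi, r

B, pi, r = lcga_em(Y, X, K)
print("class curves (intercept, slope):\n", np.round(B, 2))
print("class proportions:", np.round(pi, 2))

In an applied analysis one would fit models with varying K and compare them on information criteria and interpretability before settling on the number of classes.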
A challenge facing nearly all studies in the psychological sciences is how best to combine multiple items into a valid and reliable score to be used in subsequent modelling. The most common method is to compute a mean of the items, but more contemporary approaches use various forms of latent score estimation. Regardless of approach, outside of large-scale testing applications, scoring models rarely include background characteristics to improve score quality. The current paper used a Monte Carlo simulation design to study score quality for different psychometric models that did and did not include covariates, across levels of sample size, number of items, and degree of measurement invariance. The inclusion of covariates improved score quality for nearly all design factors, and in no case did the covariates degrade score quality relative to ignoring these influences entirely. Results suggest that the inclusion of observed covariates can improve factor score estimation.
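The mechanism behind this result can be sketched in a few lines of Python, here under assumed (not estimated) parameters for a linear one-factor model: covariate-informed scoring replaces the single N(0, marginal variance) prior used for everyone with a person-specific prior centered at the covariate-implied factor mean, so each score is shrunk toward a better target. The values of gamma, psi, and the loadings below are illustrative.

import numpy as np

rng = np.random.default_rng(11)
n, p = 300, 6
x = rng.normal(size=n)                   # observed background covariate
gamma, psi = 0.5, 1.0                    # effect of x on the factor; residual factor variance
eta = gamma * x + rng.normal(scale=np.sqrt(psi), size=n)
lam = np.full(p, 0.7)
theta = np.full(p, 0.51)                 # item residual variances
Y = np.outer(eta, lam) + rng.normal(scale=np.sqrt(theta), size=(n, p))

def regression_scores(Y, lam, theta, prior_mean, prior_var):
    # posterior-mean (regression) factor scores under a linear one-factor model
    Sigma = prior_var * np.outer(lam, lam) + np.diag(theta)
    w = prior_var * np.linalg.solve(Sigma, lam)
    return prior_mean + (Y - np.outer(prior_mean, lam)) @ w

# covariate-blind scoring: the same N(0, marginal variance) prior for everyone
s_blind = regression_scores(Y, lam, theta, np.zeros(n), gamma ** 2 + psi)
# covariate-informed scoring: each person's prior mean shifts with x
s_inf = regression_scores(Y, lam, theta, gamma * x, psi)

for name, s in [("blind", s_blind), ("informed", s_inf)]:
    print(name, "r =", round(np.corrcoef(s, eta)[0, 1], 3),
          "rmse =", round(float(np.sqrt(np.mean((s - eta) ** 2))), 3))

Because the scoring parameters are treated as known here, the comparison isolates the value of the covariate information itself; in a full simulation the parameters would also be estimated, as in the paper's design.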
When generating scores to represent latent constructs, analysts have a choice between psychometric approaches that are principled but can be complicated and time-intensive, and approaches that are simple and fast but less precise, such as sum or mean scoring. We explain the reasons for preferring modern psychometric approaches: namely, the use of unequal item weights and severity parameters, the ability to account for local dependence and differential item functioning, and the use of covariate information to more efficiently estimate factor scores. We describe moderated nonlinear factor analysis (MNLFA), a relatively new, highly flexible approach that allows analysts to develop precise factor score estimates that address the limitations of sum score, mean score, and traditional factor analytic approaches to scoring. We then outline the steps involved in using the MNLFA scoring approach and discuss the circumstances in which this approach is preferred. To overcome the difficulty of implementing MNLFA models in practice, we developed an R package, aMNLFA, that automates much of the rule-based scoring process. We illustrate the use of aMNLFA with an empirical example of scoring alcohol involvement in a longitudinal study of 6,998 adolescents and compare the performance of MNLFA scores with traditional factor analysis and sum scores based on the same set of 12 items. MNLFA scores retain
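The scoring step at the end of an MNLFA workflow can be sketched as follows; this is not the aMNLFA package's interface, only an illustration of the computation that follows model fitting. The sketch assumes binary (2PL-type) items whose intercepts and loadings are moderated by covariates, and computes expected a posteriori (EAP) scores over a quadrature grid; all parameter values stand in for fitted estimates.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, p, q = 400, 12, 2
Xc = rng.normal(size=(n, q))                  # covariates (e.g., coded age, sex)
# "fitted" MNLFA parameters (illustrative values, not real estimates)
a0 = rng.uniform(0.8, 1.5, size=p)            # baseline loadings
a1 = np.zeros((q, p)); a1[0, 2] = 0.4         # loading DIF on one item
c0 = rng.uniform(-1.0, 1.0, size=p)           # baseline intercepts
c1 = np.zeros((q, p)); c1[1, 5] = 0.5         # intercept DIF on another item
gamma = np.array([0.3, -0.2])                 # covariate effects on the factor mean

mu = Xc @ gamma
eta = mu + rng.normal(size=n)
A = a0 + Xc @ a1                              # person-specific loadings (n x p)
C = c0 + Xc @ c1                              # person-specific intercepts (n x p)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(A * eta[:, None] + C))))

def eap_scores(Y, A, C, mu, grid=np.linspace(-4, 4, 81)):
    # log-likelihood of each person's responses at every quadrature point
    Z = A[..., None] * grid + C[..., None]    # n x p x G logits
    Pg = 1.0 / (1.0 + np.exp(-Z))
    ll = (Y[..., None] * np.log(Pg) + (1 - Y[..., None]) * np.log(1 - Pg)).sum(1)
    # person-specific normal prior centered at the moderated factor mean
    post = np.exp(ll - ll.max(1, keepdims=True)) * norm.pdf(grid, mu[:, None], 1.0)
    post /= post.sum(1, keepdims=True)
    return post @ grid                        # posterior mean (EAP) per person

scores = eap_scores(Y, A, C, mu)
print("corr(score, eta) =", round(np.corrcoef(scores, eta)[0, 1], 3))

Because the item parameters and the prior are allowed to vary by person, two people with identical response patterns but different covariate profiles can, appropriately, receive different scores.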
Although it is current best practice to directly model latent factors whenever feasible, there remain many situations in which this approach is not tractable. Recent advances in covariate-informed factor score estimation can be used to provide manifest scores that are then used in second-stage analyses, but these methods are currently understudied. Here we extend our prior work on factor score recovery to examine the use of factor score estimates as predictors, both in the presence and absence of the same covariates that were used in score estimation. Results show that whereas the relation between the factor score estimates and the criterion is typically well recovered, substantial bias and increased variability are evident in the covariate effects themselves. Importantly, using covariate-informed factor score estimates substantially, and often wholly, mitigates these biases. We conclude with implications for future research and recommendations for the use of factor score estimates in practice.
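A compact simulation consistent with this result can be sketched in Python, again with illustrative effect sizes and known scoring parameters: generate a criterion that depends on both the factor and the covariate, score the items with and without the covariate information, and compare the second-stage regression estimates.

import numpy as np

rng = np.random.default_rng(21)
n, p = 5000, 6
x = rng.normal(size=n)
gamma, psi = 0.5, 1.0
eta = gamma * x + rng.normal(scale=np.sqrt(psi), size=n)
lam, theta = np.full(p, 0.7), np.full(p, 1.0)
Y = np.outer(eta, lam) + rng.normal(scale=np.sqrt(theta), size=(n, p))
z = 0.4 * eta + 0.3 * x + rng.normal(size=n)   # criterion with a direct x effect

def regression_scores(prior_mean, prior_var):
    # posterior-mean factor scores, as in the covariate-informed scoring step
    Sigma = prior_var * np.outer(lam, lam) + np.diag(theta)
    w = prior_var * np.linalg.solve(Sigma, lam)
    return prior_mean + (Y - np.outer(prior_mean, lam)) @ w

def second_stage(s):
    # second-stage model: regress z on the score estimate and the covariate
    D = np.column_stack([np.ones(n), s, x])
    return np.linalg.lstsq(D, z, rcond=None)[0][1:]

b_blind = second_stage(regression_scores(np.zeros(n), gamma ** 2 + psi))
b_inf = second_stage(regression_scores(gamma * x, psi))
print("true (factor, covariate) effects: 0.40 0.30")
print("covariate-blind scores:   ", np.round(b_blind, 2))
print("covariate-informed scores:", np.round(b_inf, 2))

In runs of this sketch, the covariate effect tends to be inflated when covariate-blind scores are used and essentially recovered when covariate-informed scores are used, mirroring the pattern the study reports.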