2016
DOI: 10.1177/0013164416633735
Extracting Spurious Latent Classes in Growth Mixture Modeling With Nonnormal Errors

Abstract: Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indiscriminately, using the same fit statistics and likelihood ratio tests. This can lead to the overextraction of latent classes and the attribution of substantive meaning to these spurious classes. The goals of this study are (1) to explore the …

Cited by 21 publications (29 citation statements). References 37 publications.
“…First, it is important to keep in mind that a key limitation of mixture models (including LPA) is the reliance on the assumption that all extracted profiles follow multivariate normal distributions (McLachlan & Peel, 2000). Violation of this assumption, which is impossible to test in practice, could possibly result in the extraction of spurious latent profiles (Bauer & Curran, 2003; Guerra-Peña & Steinley, 2016; Sen, Cohen, & Kim, 2016). As new methods emerge to test this assumption or to estimate mixture models without relying on this assumption (e.g., …), emerging person-centered evidence will likely need to be reassessed.…”
Section: Methodological Considerations and Directions for Future Research
confidence: 99%
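The untestability point in the passage above can be made concrete: within-profile normality checks are circular, because class memberships are themselves estimated from the fitted model. Below is a minimal Python sketch of this, using scikit-learn's GaussianMixture as a simplified stand-in for an LPA model; the simulated data, class count, and Shapiro-Wilk check are illustrative assumptions, not the cited authors' procedure.

import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

# One skewed population (no true subgroups), three observed indicators.
rng = np.random.default_rng(42)
X = rng.lognormal(sigma=0.7, size=(500, 3))

# Fit a 2-class mixture and hard-assign cases (estimated, not observed).
gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
labels = gm.predict(X)

for k in range(2):
    for j in range(X.shape[1]):
        stat, p = stats.shapiro(X[labels == k, j])
        print(f"class {k}, indicator {j}: Shapiro-Wilk p = {p:.3f}")

# Caveat: the assignments were chosen to make each class look as close to
# normal as possible, so non-rejection here is weak evidence that the
# within-profile normality assumption actually holds.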
“…The authors considered skewness and kurtosis values of 1 on the repeated measures and found that the BLRT performed best among likelihood ratio tests and fit indices, except for the BIC and SBIC. Guerra-Peña and Steinley [14] showed that the BLRT performs better than other likelihood ratio tests for both normal and nonnormal repeated measures. Nevertheless, even under normal conditions, Type I error rates were 5% or higher, becoming worse as sample size increases (e.g., N = 800) and the ratio of kurtosis to skewness becomes larger (e.g., skewness of 0 and kurtosis of 4).…”
Section: Classification Problems of Normal GMM
confidence: 99%
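For readers unfamiliar with the BLRT discussed above: it compares a k-class null model against a (k+1)-class alternative by simulating repeated data sets from the fitted k-class model and refitting both models on each. A minimal sketch follows, again using scikit-learn's GaussianMixture as a stand-in for a full growth mixture model (no growth factors; the function names and settings are illustrative assumptions).

import numpy as np
from sklearn.mixture import GaussianMixture

def total_loglik(gm, X):
    # gm.score() returns the mean per-observation log-likelihood
    return gm.score(X) * X.shape[0]

def blrt(X, k, n_boot=99, seed=0):
    rng = np.random.RandomState(seed)
    gm_null = GaussianMixture(n_components=k, n_init=5, random_state=rng).fit(X)
    gm_alt = GaussianMixture(n_components=k + 1, n_init=5, random_state=rng).fit(X)
    lr_obs = 2 * (total_loglik(gm_alt, X) - total_loglik(gm_null, X))

    exceed = 0
    for _ in range(n_boot):
        # Simulate from the k-class null model, then refit both models.
        Xb, _ = gm_null.sample(X.shape[0])
        b_null = GaussianMixture(n_components=k, n_init=2, random_state=rng).fit(Xb)
        b_alt = GaussianMixture(n_components=k + 1, n_init=2, random_state=rng).fit(Xb)
        exceed += 2 * (total_loglik(b_alt, Xb) - total_loglik(b_null, Xb)) >= lr_obs
    return lr_obs, (exceed + 1) / (n_boot + 1)  # bootstrap p value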
“…Concerning the use of fit indices to select the number of latent classes, the Akaike information criterion (AIC) overestimates the number of latent components even when the repeated measures are normally distributed [26][27][28], and when the data are nonnormal [10][11][12][14]. The Bayesian information criterion (BIC) underestimates the number of groups relative to the "true" model when the sample size is small [27,29], overestimates the number of latent classes with nonnormal data and large sample sizes [14], and both the BIC and the sample-corrected BIC (SBIC) overestimate the number of latent classes when the model has been misspecified [12] or the data are nonnormal [10][11][12][14]. Moreover, fit indices can distinguish between classes with different trajectories only if the classes are well separated and the sample size is large [29].…”
Section: Classification Problems of Normal GMM
confidence: 99%
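The overextraction pattern described above is easy to reproduce. In the sketch below (illustrative assumptions: scikit-learn's GaussianMixture with its built-in aic/bic methods, lognormal single-group data), the data come from one skewed population, yet the information criteria will often prefer more than one normal class, especially as N grows.

import numpy as np
from sklearn.mixture import GaussianMixture

# One true group, four skewed repeated measures -- no real subpopulations.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=0.8, size=(800, 4))

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(f"k={k}  AIC={gm.aic(X):10.1f}  BIC={gm.bic(X):10.1f}")

# With skewed errors, extra normal components soak up the skew, so AIC (and,
# at large N, BIC) tends to favor k > 1: spurious latent classes.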