2021
DOI: 10.3389/feduc.2021.613645

Model Fit and Comparison in Finite Mixture Models: A Review and a Novel Approach

Abstract: One of the greatest challenges in the application of finite mixture models is model comparison. A variety of statistical fit indices exist, including information criteria, approximate likelihood ratio tests, and resampling techniques; however, none of these indices describe the amount of improvement in model fit when a latent class is added to the model. We review these model fit statistics and propose a novel approach, the likelihood increment percentage per parameter (LIPpp), targeting the relative improveme…

Cited by 16 publications (13 citation statements)
References 35 publications
“…There was an elbow in the plots of all statistics, showing that the increase in fit for each added profile diminished after four profiles. The LIPpp indicated a similar trend: adding a fifth profile increased fit by only slightly more than 0.1 percent per parameter, close to what is considered a small increase (i.e., LIPpp > 0.1; Grimm et al., 2021). The BLRT was significant for all models, so it did not provide any useful information for class enumeration.…”
Section: Both Diagonal Variance-covariance Specifications (A and B) (mentioning)
confidence: 61%
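The following is a minimal Python sketch of how a likelihood increment percentage per parameter could be computed across a sequence of fitted models. It assumes LIPpp is the percentage gain in log-likelihood over the previous model divided by the number of added parameters, which is an illustrative reading of the excerpt above rather than the exact definition in Grimm et al. (2021); the log-likelihoods and parameter counts are hypothetical.

```python
# Minimal sketch of a likelihood-increment-percentage-per-parameter (LIPpp)
# calculation, assuming the statistic is the percentage gain in log-likelihood
# over the previous model divided by the number of added parameters. Consult
# Grimm et al. (2021) for the formula they actually propose.

def lip_per_parameter(loglik, n_params):
    """Return the assumed LIPpp for each model relative to the previous one.

    loglik   -- log-likelihoods for the 1-, 2-, ..., K-class models
    n_params -- number of estimated parameters for each of those models
    """
    out = []
    for k in range(1, len(loglik)):
        gain_pct = 100.0 * (loglik[k] - loglik[k - 1]) / abs(loglik[k - 1])
        added_params = n_params[k] - n_params[k - 1]
        out.append(gain_pct / added_params)
    return out

# Hypothetical log-likelihoods and parameter counts for 1- to 5-profile models.
ll = [-5400.0, -5150.0, -5040.0, -4980.0, -4960.0]
p = [8, 13, 18, 23, 28]
print(lip_per_parameter(ll, p))  # values shrink past the elbow, signalling little gain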
“…For variance-covariance specification C, the LMR-LRT was significant at α = 0.05 for all profile solutions, but it was non-significant at α = 0.01 for the 5-profile model. Although α = 0.01 is stricter than the traditional cut-off for significance (i.e., α = 0.05), our relatively large sample may lead to significant log-likelihood tests even when the difference between models is insubstantial (Grimm et al., 2021; Johnson, 2021). We therefore considered this partial support for the conclusion that the other information pointed to: that the four-profile solution was the best solution for variance-covariance specification C.…”
Section: Both Diagonal Variance-covariance Specifications (A and B) (mentioning)
confidence: 88%
“…Table 1 displays the model fit statistics for the nine estimated latent profile models. Assessing model fit for mixture models can be difficult and complex [34]. McArdle et al. [35] first proposed the likelihood increment percentage (LIP) as a model fit statistic, and it has subsequently been used by many researchers [36–38].…”
Section: Methods (mentioning)
confidence: 99%
“…We used a semi-parametric finite mixture model (FMM) to identify data-driven cost groups using pre-transplantation annual inpatient costs. Conceptually, finite mixture models are probabilistic models that combine density functions and are based on a framework that treats observed data as coming from distinct but unobserved subpopulations [20, 21]. An FMM examines sub-groups within a given patient population without imposing pre-defined groups on the observed data [20, 21].…”
Section: Methods (mentioning)
confidence: 99%
“…Conceptually, finite mixture models are probabilistic models that combine density functions and are based on a framework that treats observed data as coming from distinct but unobserved subpopulations [20, 21]. An FMM examines sub-groups within a given patient population without imposing pre-defined groups on the observed data [20, 21]. We did not know the number of latent cost groups a priori.…”
Section: Methods (mentioning)
confidence: 99%
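As an illustration of the class-enumeration workflow these excerpts describe, namely fitting mixture models with an increasing number of components when the number of latent groups is unknown a priori, here is a minimal Python sketch using scikit-learn's GaussianMixture on simulated data. The cited study's actual data, software, and model specification are not reproduced; the simulated "cost" values and the candidate range of components are assumptions for demonstration only.

```python
# Minimal sketch: fit finite mixture models with 1-5 components and compare
# information criteria, since the number of latent groups is not known a priori.
# Uses scikit-learn's GaussianMixture on simulated data as an illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated "annual cost" data drawn from two latent subpopulations.
costs = np.concatenate([
    rng.normal(20_000, 5_000, size=300),
    rng.normal(80_000, 15_000, size=100),
]).reshape(-1, 1)

results = []
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(costs)
    total_loglik = gm.score(costs) * len(costs)  # score() returns mean per-sample log-likelihood
    results.append((k, total_loglik, gm.bic(costs), gm.aic(costs)))

for k, ll, bic, aic in results:
    print(f"k={k}: loglik={ll:.1f}  BIC={bic:.1f}  AIC={aic:.1f}")

# The model with the lowest BIC is a common starting point for choosing the
# number of latent groups; statistics such as the LIPpp or a bootstrap
# likelihood ratio test can supplement this comparison.
```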