2016
DOI: 10.1080/10705511.2016.1196108
Measurement Invariance Testing Across Between-Level Latent Classes Using Multilevel Factor Mixture Modeling

Cited by 39 publications (40 citation statements)
References 51 publications
“…Expressing ARB and AARB as a percentage is helpful for interpreting the magnitude of bias, but guidelines for values indicating significant bias are informal. For example, Curran, West, and Finch (1996) cited Kaplan (1989) in treating ARB > 10% for chi-square statistics as indicating significant bias; Hoogland and Boomsma (1998) treated ARB > 5% as biased for factor loadings and ARB > 10% as biased for standard errors, as did Kim, Joo, Lee, Wang, and Stark (2016) for factor loadings; Jin, Luo, and Yang-Wallentin (2016) treated ARB > 5% as biased for factor loadings; and Bai and Poon (2009) treated AARB > 2.5% for slopes and AARB > 5% for standard errors as showing significant bias. Guidelines for characterizing AB and AAB values as showing evidence of significant bias are unique to individual Monte Carlo studies (e.g., Yuan et al., 2015).…”
Section: Bias and RMSE Outcomes in Monte Carlo Studies
confidence: 99%
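The cutoffs discussed above apply to average relative bias (ARB) and average absolute relative bias (AARB) computed across Monte Carlo replications. A minimal sketch of these quantities under their standard definitions (signed and absolute relative deviation from the true parameter, averaged over replications, expressed as a percentage); the function name and example values are illustrative, and the cited studies may differ in detail:

```python
import statistics

def relative_bias_stats(estimates, true_value):
    """Return (ARB, AARB) as percentages across replications.

    ARB averages signed relative bias, so over- and underestimates
    can cancel; AARB averages the absolute relative bias, so it
    reflects the typical magnitude of deviation regardless of sign.
    """
    rb = [(est - true_value) / true_value * 100 for est in estimates]
    arb = statistics.mean(rb)
    aarb = statistics.mean(abs(b) for b in rb)
    return arb, aarb

# Illustrative example: factor-loading estimates from 5 replications,
# true loading 0.80.
arb, aarb = relative_bias_stats([0.82, 0.78, 0.85, 0.76, 0.80], 0.80)
```

In this example ARB is small (signed errors largely cancel) while AARB is larger, which is why some studies apply separate, stricter cutoffs to absolute measures, as in Bai and Poon's (2009) AARB thresholds.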
“…The BIC showed excellent performance in identifying the number of classes when class separation was large and sample size was large (Nylund et al., 2007; Lubke and Neale, 2008; Li et al., 2009). When both class separation was low and sample size was small, the BIC tended to under-extract latent classes (Kim et al., 2016). The HBIC showed similar or slightly better performance than the BIC.…”
Section: Discussion
confidence: 99%
“…For example, Harring et al. (2012) used an absolute bias cutoff of .05 for structural equation modeling estimates; Jin et al. (2016) and Kim et al. (2016) used a relative bias cutoff of 5% for estimated factor loadings; Leite and Beretvas (2010) used 5% when examining bias after imputing missing Likert-type data; Li et al. (2011) used 5% when evaluating bias in estimated correlations; Wang et al. (2012) used 5% in their study of the impact of violating factor scaling assumptions; and Ye and Daniel (2017) used 5% for assessing bias in cross-classified random effect models, as did Meyers and Beretvas (2006) and Chung et al. (2018).…”
Section: Bias
confidence: 99%
“…The rationale for these cutoffs is not statistical but simply that they were used in previous Monte Carlo studies. For example, Meyers and Beretvas (2006), Li et al. (2011), Leite and Beretvas (2010), Harring et al. (2012), Wang et al. (2012), Kim et al. (2016), Ye and Daniel (2017), and Chung et al. (2018) cited Hoogland and Boomsma (1998) as the basis for employing a .05 or 5% cutoff. Ironically, the rationale offered by Hoogland and Boomsma was itself arbitrary: "A boundary for acceptance of .05 is often used in robustness studies."…”
Section: Bias
confidence: 99%