2016
DOI: 10.1080/10705511.2016.1252265

Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

Abstract: Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indices such as AIC or BIC, and inference is done based on the selected best-fitting model. This practice does not account for the possibility that due to sampling variability, a different model might be selected as the preferred model in a new sample from the same population. A previous study illustrated a bootstrap approach to gauge this mo…
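To illustrate the general idea behind such a bootstrap check (not the authors' exact implementation), the following minimal Python sketch resamples the data with replacement, refits each candidate model on every bootstrap sample, and records how often each model is preferred by the chosen criterion. The `candidate_fits` mapping, the `.aic`/`.bic` attributes on the fitted objects, and all names are assumptions made for this sketch.

```python
import numpy as np

def bootstrap_selection_rates(data, candidate_fits, criterion="bic",
                              n_boot=1000, seed=0):
    """Sketch of bootstrap model-selection uncertainty (hypothetical helper).

    `candidate_fits` is assumed to map model names to functions that take a
    resampled data array and return a fitted-model object exposing `.aic`
    and `.bic` (as, e.g., statsmodels results objects do).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    counts = dict.fromkeys(candidate_fits, 0)
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]          # resample rows with replacement
        scores = {name: getattr(fit(sample), criterion)    # AIC or BIC per candidate model
                  for name, fit in candidate_fits.items()}
        counts[min(scores, key=scores.get)] += 1           # model preferred in this draw
    return {name: c / n_boot for name, c in counts.items()}
```

A result such as {"model_A": 0.62, "model_B": 0.38} would indicate that the nominally best-fitting model is preferred in only about 62% of bootstrap samples, i.e., the selection is far from certain.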

Cited by 33 publications (28 citation statements)
References 32 publications (50 reference statements)
“…For the factorial variables (hypothesis, software package and multiple testing method), this was not possible because there is not a single coefficient for the factor; in addition, for software package and multiple testing methods, some bootstrap samples did not contain all values of the factor. For these variables we instead performed model comparison between the full model and a reduced model excluding each factor, and computed the proportion of times the full model was selected on the basis of the model selection criterion (using both Bayesian information criterion and Akaike information criterion) being numerically lower in the full model [27].…”
Section: Factors Related To Analytical Variability
confidence: 99%
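A minimal sketch of the comparison described in the quoted passage above: across bootstrap samples, compare a full model against a reduced model and compute the proportion of samples in which the full model attains the numerically lower criterion value. The helper names and the assumption that `fit_full`/`fit_reduced` return objects with `.aic`/`.bic` attributes are hypothetical.

```python
import numpy as np

def full_model_win_rate(data, fit_full, fit_reduced, criterion="bic",
                        n_boot=1000, seed=0):
    """Proportion of bootstrap samples in which the full model has the
    numerically lower criterion value (BIC by default; pass 'aic' for AIC).
    Hypothetical helper; not the cited paper's code."""
    rng = np.random.default_rng(seed)
    n = len(data)
    wins = 0
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]            # resample with replacement
        full_score = getattr(fit_full(sample), criterion)
        reduced_score = getattr(fit_reduced(sample), criterion)
        wins += full_score < reduced_score                    # full model "selected" this draw
    return wins / n_boot
```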
“…As both criteria led to the same results (with only slight differences in decimal places), we will only report the results based on ΔBIC here (see Table 5). Please note that the BIC tends to prefer models with fewer parameters, especially in small samples, and that fit indices are probabilistic, not absolute, criteria (Lubke et al, 2017). For openness, an accentuated long-term effect fit comparably better than a reversed effect in the multivariate (ΔBIC reversed-accentuated, multivariate = 5.51), but not in the univariate model (ΔBIC reversed-accentuated, univariate = 1.89).…”
Section: Analyses Across T1-T2-T3-T4
confidence: 99%
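For clarity, the ΔBIC values quoted above are plain differences in BIC between two competing models. Under the labeling implied by the quoted comparison (positive values favoring the accentuated-effect model), the quantity would be

\Delta\mathrm{BIC}_{\text{reversed-accentuated}} = \mathrm{BIC}_{\text{reversed}} - \mathrm{BIC}_{\text{accentuated}},

so that, for example, a value of 5.51 means the accentuated model's BIC is 5.51 points lower. This sign convention is an assumption inferred from the quoted text, not stated in it.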
“…Hastie et al (2001) claimed that the SRM method performs poorly and suggested that AIC results in superior performance. Lubke et al (2017) performed a simulation study on selecting a model via a bootstrap approach. In addition, Vrieze (2012) addressed the difference between the statistics AIC and BIC, focusing on latent variable models.…”
Section: Introduction
confidence: 99%