2023 | DOI: 10.1111/ijcs.12906

Quantifying model selection uncertainty via bootstrapping and Akaike weights

Abstract: Picking one 'winner' model for researching a certain phenomenon while discarding the rest implies a confidence that may misrepresent the evidence. Multimodel inference allows researchers to more accurately represent their uncertainty about which model is 'best'. But multimodel inference, with Akaike weights (weights reflecting the relative probability of each candidate model) and bootstrapping, can also be used to quantify model selection uncertainty, in the form of empirical variation in parameter estimates acr…
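The abstract's core idea — combining Akaike weights with bootstrapping to quantify model selection uncertainty — can be illustrated with a minimal sketch. This is not the paper's implementation; the candidate model set (polynomial degrees), the toy data, and the least-squares AIC are all illustrative assumptions.

```python
# Hedged sketch: Akaike weights and bootstrap model-selection frequencies.
# The model set, data, and fitting routine are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def polynomial_aic(x, y, degree):
    """Fit a polynomial by least squares; return its AIC assuming Gaussian errors."""
    n = len(y)
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2  # coefficients plus error variance
    return n * np.log(rss / n) + 2 * k

def akaike_weights(aics):
    """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta_i = AIC_i - min AIC."""
    delta = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-delta / 2)
    return w / w.sum()

# Toy data: quadratic signal plus noise; candidate models are degrees 1-3.
x = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.3, x.size)
degrees = [1, 2, 3]

# Bootstrap: refit on resamples, recording each model's Akaike weight, so the
# spread of weights (and how often each model 'wins') expresses selection uncertainty.
B = 500
weights = np.empty((B, len(degrees)))
for b in range(B):
    idx = rng.integers(0, x.size, x.size)  # resample cases with replacement
    aics = [polynomial_aic(x[idx], y[idx], d) for d in degrees]
    weights[b] = akaike_weights(aics)

freq = np.mean(weights.argmax(axis=1)[:, None] == np.arange(len(degrees)), axis=0)
print("selection frequency per degree:", dict(zip(degrees, freq.round(2))))
print("mean Akaike weight per degree:", dict(zip(degrees, weights.mean(0).round(2))))
```

A model that wins only a modest share of bootstrap resamples — even if it wins on the original sample — is exactly the situation where declaring a single 'winner' overstates the evidence.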

Cited by 14 publications (13 citation statements) · References 59 publications (97 reference statements)
“…High uncertainty implies that a measurement is consistent with a wide range of plausible values for the measurand, which may lead to a wide range of obtained values across different measurements and studies. Quantifying and managing this uncertainty are major challenges—as recent research has highlighted in related contexts (Rigdon & Sarstedt, 2022; Rigdon et al, 2020; Rigdon et al, 2023).…”
Section: Recommendations Concerning the Use of Silicon Samples
confidence: 99%
“…Furthermore, Akaike weights based on AIC facilitate the development of model-averaged predictions under conditions of model selection uncertainty. For example, Rigdon et al (2023) proposed an approach to quantify the uncertainty of competitive models based on Akaike weights and bootstrapping. The uncertainty perspective provides a new dimension to assess such models, indicating evidence for or against their generalizability.…”
Section: Industrial Management and Data Systems
confidence: 99%
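The statement above notes that Akaike weights also enable model-averaged predictions under model selection uncertainty. A minimal sketch of that idea, with purely illustrative numbers (the helper and its inputs are assumptions, not code from any cited work):

```python
# Hedged sketch: model averaging with Akaike weights. Illustrative only.
import numpy as np

def model_average(predictions, weights):
    """Weighted combination of per-model predictions for the same case,
    where the weights are the models' Akaike weights (non-negative, sum to 1)."""
    predictions = np.asarray(predictions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(weights @ predictions)

# Two candidate models predict 1.8 and 2.4 for a new case;
# their Akaike weights are 0.7 and 0.3, so the average leans toward model 1.
yhat = model_average([1.8, 2.4], [0.7, 0.3])
print(yhat)  # close to 1.98
```

Rather than committing to one model's prediction, the averaged value reflects each model's relative support in the data.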
“…They then recommend, as a second step, using a combination of Akaike weights and bootstrapping to measure the level of uncertainty in the model effects caused by the mediator. This approach adds a new perspective to evaluating mediation models as it helps assess the potential generalizability of the research results (Rigdon et al., 2023).…”
Section: Essential PLS-SEM Analytical Tools and Metrics
confidence: 99%
“…Addressing this concern, Rigdon et al. (2023) recently introduced a procedure to quantify this model selection uncertainty.…”
Section: Introduction
confidence: 99%
“…Researchers can draw on this approach to ascertain whether the consideration of different model configurations has the potential to decrease or bears the risk of increasing uncertainty in model estimates. Rigdon et al (2023) evaluate and showcase their approach in standard model comparison settings where researchers explicitly hypothesize different model configurations. However, the approach's relevance extends beyond such standard model comparisons (which researchers rarely document in their published research anyway) to much more visible modeling practices such as mediation.…”
Section: Introduction
confidence: 99%