Published: 2017
DOI: 10.1177/0049124117729704

The Difference Between Instability and Uncertainty: Comment on Young and Holsteen (2017)

Abstract: Young and Holsteen (YH) introduce a number of tools for evaluating model uncertainty. In so doing, they are careful to differentiate their method from existing forms of model averaging. The fundamental difference lies in the way in which the underlying estimates are weighted. Whereas standard approaches to model averaging assign higher weight to better fitting models, the YH method weights all models equally. As I show, this is a nontrivial distinction, in that the two sets of procedures tend to produce radica…
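The contrast described in the abstract, equal weighting of every specification (the YH approach) versus fit-based weighting (standard model averaging), can be made concrete with a small simulation. The sketch below is illustrative only: the simulated data, the control variables, and the use of BIC-based weights are assumptions for demonstration, not the specific procedure used by Young and Holsteen or by Slez.

```python
# Illustrative sketch (not the authors' code): average a focal coefficient
# across all control-variable subsets, once with equal weights and once
# with BIC-based weights, to show how the two schemes can diverge.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                                  # focal predictor
z1, z2, z3 = (rng.normal(size=n) for _ in range(3))     # candidate controls
y = 0.5 * x + 0.8 * z1 + rng.normal(size=n)             # only z1 belongs in the model

controls = {"z1": z1, "z2": z2, "z3": z3}
estimates, bics = [], []

# Fit y ~ x plus every subset of the controls; record x's coefficient and BIC.
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        X = sm.add_constant(np.column_stack([x] + [controls[c] for c in subset]))
        fit = sm.OLS(y, X).fit()
        estimates.append(fit.params[1])                 # coefficient on x
        bics.append(fit.bic)

estimates, bics = np.asarray(estimates), np.asarray(bics)

# Equal weights: every specification counts the same (the YH-style average).
equal_avg = estimates.mean()

# BIC weights: better-fitting specifications dominate the average.
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
fit_weighted_avg = np.sum(w * estimates)

print(f"equal-weight average: {equal_avg:.3f}")
print(f"fit-weighted average: {fit_weighted_avg:.3f}")
```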

Cited by 18 publications (27 citation statements)
References 31 publications
“…Under-weighting the right set of controls may lead to substantial biases. 7 To address this issue, BSCA displays a score (middle panel) for each control set (bottom panel), from most to least supported by the data, in addition to the corresponding coefficient estimates (top panel). The score is a transformation of the Extended Bayesian Information Criterion (EBIC) roughly interpretable as the probability of each control set given the data (Supplementary Information).…”
Section: Averaging Control Specifications
Mentioning (confidence: 99%)
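The quoted passage describes turning the EBIC for each control set into a score "roughly interpretable as the probability of each control set given the data." The exact transformation is in that paper's Supplementary Information; the snippet below is only a guess at the standard information-criterion weighting it most likely resembles, using invented EBIC values.

```python
# Hedged sketch: convert information-criterion values (e.g., EBIC) into
# normalized, probability-like weights via the usual exp(-IC/2) rule.
# This is an assumption about the form of the BSCA score, not its exact formula.
import numpy as np

def ic_weights(ic_values):
    """Map IC values (lower is better) to weights that sum to one."""
    ic = np.asarray(ic_values, dtype=float)
    rel = ic - ic.min()          # subtract the minimum for numerical stability
    w = np.exp(-0.5 * rel)
    return w / w.sum()

# Invented EBIC values for three candidate control sets.
print(ic_weights([1012.4, 1015.0, 1030.7]))
```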
“…We briefly illustrate the above-mentioned issues (see Giudice & Gangestad and Slez for further discussion), 6,7 and how BSCA addresses them, by re-assessing Orben & Przybylski's study. Our main contribution is providing a data analysis methodology that ameliorates pitfalls in SCA-based inference and hence amplifies its potential.…”
Mentioning (confidence: 98%)
“…Young & Holsteen, 2017). Such simple averaging assigns equal weight to all models, in short, making the assumption that all models are equal (see also Slez, 2017). Furthermore, such approaches not only treat all models as equal but treat all models as equally valid.…”
Section: Pre-registration and Model Fit
Mentioning (confidence: 99%)
“…Since the model space is built on theory and past research, wouldn't we want to weight these models by their plausibility even without a diverse group of social scientists? If we weighted the plausibility of models, we would likely find that a plausibility-weighted model robustness procedure resulted in more "concentrated" results, to use the term found in Slez (2017). The plausibility-weighted computational model robustness analysis would likely result in more stable results than the unweighted procedure used by MY, even for the same set of plausible variables.…”
Section: Plausible Models and the Degree of Plausibility
Mentioning (confidence: 99%)
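The plausibility-weighting idea raised in this last excerpt sits between the two schemes above: weights come from researcher judgment rather than from model fit or from treating every specification equally. A minimal sketch, with invented plausibility scores and estimates:

```python
# Minimal sketch of plausibility-weighted averaging: the same specification
# estimates, weighted by researcher-assigned plausibility rather than equally
# or by model fit. All numbers below are invented for illustration.
import numpy as np

estimates = np.array([0.42, 0.47, 0.05, 0.51])   # one estimate per specification
plausibility = np.array([0.9, 0.8, 0.1, 0.7])    # subjective plausibility scores

weights = plausibility / plausibility.sum()
print("plausibility-weighted average:", np.sum(weights * estimates))
print("unweighted (equal-weight) average:", estimates.mean())
```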