2009
DOI: 10.5089/9781451873498.001

Benchmark Priors Revisited: On Adaptive Shrinkage and the Supermodel Effect in Bayesian Model Averaging

Abstract: This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Default prior choices fixing Zellner's g are predominant in the Bayesian Model Averaging literature, but tend to concentrate posterior mass on a tiny set of models. The paper demo…
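For context on the terminology in the abstract (a brief sketch in standard notation, not quoted from the paper): in the linear model y = a + Xb + e with normal errors, Zellner's g-prior places

\[
\beta \mid g, \sigma^{2} \sim N\!\bigl(0,\; g\,\sigma^{2}(X'X)^{-1}\bigr),
\qquad
E[\beta \mid y, g] = \frac{g}{1+g}\,\hat{\beta}_{\mathrm{OLS}},
\]

so a fixed benchmark choice such as g = N (unit information prior) or g = max(N, K^2) (the BRIC benchmark) pins the shrinkage factor g/(1+g) in advance. The "adaptive shrinkage" of the title instead treats g as random (a hyper-g prior), letting the data govern the degree of shrinkage.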

Cited by 127 publications (175 citation statements)
References 23 publications
“…Estimating all 2^22 possible specifications is computationally too demanding; therefore, we approximate the whole model space by using the Model Composition Markov Chain Monte Carlo algorithm (Madigan & York, 1995), which only traverses the most important part of the model space: that is, the models with high posterior model probabilities. Such a simplification is commonly applied in applications of BMA (see, for example, Feldkircher & Zeugner, 2009). …”
Section: Estimation and Results (mentioning)
confidence: 99%
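To make the quoted MC3 idea concrete, the following is a minimal, illustrative R sketch (not the Madigan & York, 1995, implementation and not the BMS package): a sampler that proposes adding or dropping one regressor at a time and scores each visited model with a BIC approximation to the marginal likelihood. All data and settings are invented for the example.

# Toy data: 10 candidate regressors, only the first two matter
set.seed(1)
n <- 100; K <- 10
X <- matrix(rnorm(n * K), n, K)
y <- X[, 1] - 0.5 * X[, 2] + rnorm(n)

log_marg <- function(incl) {
  # BIC-based approximation to the log marginal likelihood (up to a constant)
  fit <- if (!any(incl)) lm(y ~ 1) else lm(y ~ X[, incl, drop = FALSE])
  -BIC(fit) / 2
}

iter   <- 5000
incl   <- rep(FALSE, K)          # start from the null model
cur    <- log_marg(incl)
visits <- numeric(K)
for (s in seq_len(iter)) {
  j    <- sample.int(K, 1)       # propose flipping one regressor in or out
  prop <- incl; prop[j] <- !prop[j]
  cand <- log_marg(prop)
  if (log(runif(1)) < cand - cur) {   # Metropolis-Hastings acceptance
    incl <- prop; cur <- cand
  }
  visits <- visits + incl
}
round(visits / iter, 2)          # crude posterior inclusion probabilities (no burn-in, for brevity)

The sampler concentrates its visits on high-probability models instead of enumerating all 2^K specifications, which is the point made in the quotation.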
“…With 2^32 possible combinations, it would take several months to estimate all the regressions, so our approach relies on a Monte Carlo Markov Chain algorithm that walks through the potential models (we use the bms R package by Feldkircher & Zeugner, 2009). For each model BMA computes a weight, called the posterior model probability, which is analogous to information criteria or adjusted R-squared and captures how well the model fits the data.…”
Section: Publication Characteristics To See Whether Published Studies (mentioning)
confidence: 99%
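A hedged usage sketch of the bms R package referenced in the quotation (Feldkircher & Zeugner's BMS package); the argument values are illustrative, and 'mydata' is a hypothetical data frame whose first column is the dependent variable, which is the layout the package expects as far as I know.

library(BMS)
bma <- bms(mydata,
           burn   = 10000, iter = 50000,  # MCMC burn-in and retained draws
           g      = "hyper=UIP",          # adaptive (hyper-g) shrinkage, the paper's theme; "UIP" would fix g instead
           mprior = "uniform",            # uniform prior over the model space
           mcmc   = "bd")                 # birth/death MC3 sampler
coef(bma)                                 # posterior inclusion probabilities and posterior means
topmodels.bma(bma)[, 1:3]                 # posterior model probabilities of the best-visited models

The posterior model probability attached to each visited model is the weight referred to in the quotation; averaging coefficient estimates with these weights yields the BMA point estimates.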
“…Using Bayesian model averaging (BMA), we assessed the relative importance of predictors over the entire model space, which includes the effect of each predictor variable assessed independently or cumulatively with other variables [47]. The best-fitting model using a single parameter or combination of parameters was chosen as the one with the highest posterior probability distribution using the R [48] package BMS [49]. We identified the relative importance of variables over the entire…”
Section: (B) Richness Tests and Island Biogeography (mentioning)
confidence: 99%
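Continuing the illustrative sketch above (and still assuming the hypothetical 'bma' object), the "relative importance of predictors" in this quotation corresponds to posterior inclusion probabilities, and the best-fitting model to the one with the highest posterior model probability; the "PIP" column name is my recollection of the coef.bma output and should be checked against the package documentation.

pip <- coef(bma)[, "PIP"]                 # posterior inclusion probability of each predictor (assumed column name)
sort(pip, decreasing = TRUE)              # rank predictors by relative importance
topmodels.bma(bma)[, 1, drop = FALSE]     # inclusion pattern and weight of the single best model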
“…We also applied BMA in the R [48] package BMS [49] to determine which variables alone or in combination predict PSV…”
Section: (D) Phylogenetic Tests (mentioning)
confidence: 99%