2009
DOI: 10.2139/ssrn.1564378
VAR Forecasting Using Bayesian Variable Selection

Abstract: This paper develops methods for automatic selection of variables in Bayesian vector autoregressions (VARs) using the Gibbs sampler. In particular, I provide computationally efficient algorithms for stochastic variable selection in generic linear and nonlinear models, as well as models of large dimensions. The performance of the proposed variable selection method is assessed in forecasting three major macroeconomic time series of the UK economy. Data-based restrictions of VAR coefficients can help improve upon …

Cited by 37 publications (63 citation statements)
References 46 publications
“…The resulting predictive equations closely resemble those obtained via Bayesian variable selection using a discrete mixture prior. Korobilis (2013b) showed that, in a forecasting context, sparsity was highly competitive with traditional shrinkage methods. We saw in an empirical study that the predictive performance of the horseshoe prior is very close to that of the discrete mixture priors, and often beats it.…”
Section: Discussion (mentioning)
confidence: 99%
“…ijk y_{j,t−k}). Korobilis (2013b) notes that for the discrete mixture prior, this is Bayesian model averaging.…”
Section: Predictive Performances (mentioning)
confidence: 99%
“…where γ₋ⱼ indexes all the elements of the vector γ = (γ₁, …, γₚ) but the jth, and the conjugate prior for each γⱼ is thus the independent Bernoulli density. Exact expressions for the conditional densities can be found in Korobilis (2011). Here, we provide a pseudo-algorithm that shows that the algorithm for the restricted model only adds one block, which samples the vector γ, to the standard algorithm of the unrestricted regression model:…”
Section: Model Selection and Regularization (mentioning)
confidence: 99%
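The excerpt above describes the key computational point: a restricted (variable-selection) model only adds one Gibbs block, drawing the inclusion vector γ, to the standard sampler for a linear regression. A minimal single-equation sketch of that idea is given below. This is an illustrative spike-and-slab (SSVS-style) sampler, not the paper's algorithm; the function name and all hyperparameters (spike scale tau0, slab scale tau1, prior inclusion probability pi) are assumptions.

```python
import numpy as np

def ssvs_gibbs(y, X, n_iter=1000, tau0=0.01, tau1=10.0, pi=0.5, seed=0):
    """Illustrative SSVS Gibbs sampler for y = X @ beta + e.

    Returns posterior inclusion probabilities for each column of X,
    averaged over the second half of the chain.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    gamma = np.ones(p, dtype=int)   # inclusion indicators
    sigma2 = 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.zeros((n_iter, p))
    for it in range(n_iter):
        # Block 1: beta | gamma, sigma2 -- conjugate normal update,
        # with prior variance tau1^2 (slab) or tau0^2 (spike) per coefficient.
        D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
        V = np.linalg.inv(XtX / sigma2 + D_inv)
        m = V @ Xty / sigma2
        beta = rng.multivariate_normal(m, V)
        # Block 2: sigma2 | beta -- inverse-gamma update (vague prior a = b = 0.01).
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(0.01 + n / 2, 1.0 / (0.01 + resid @ resid / 2))
        # Block 3 (the *extra* block the excerpt refers to):
        # gamma_j | beta_j is Bernoulli, with odds given by the slab vs spike
        # normal densities evaluated at the current beta_j.
        for j in range(p):
            slab = pi * np.exp(-0.5 * beta[j]**2 / tau1**2) / tau1
            spike = (1 - pi) * np.exp(-0.5 * beta[j]**2 / tau0**2) / tau0
            gamma[j] = rng.random() < slab / (slab + spike)
        draws[it] = gamma
    return draws[n_iter // 2:].mean(axis=0)
```

On simulated data where only the first two of five predictors matter, the returned inclusion probabilities concentrate near 1 for the relevant predictors and well below 1 for the irrelevant ones, which is the "data-based restriction" behaviour the abstract describes.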
“…In particular, we use Stochastic Search Variable Selection (SSVS) priors (George & McCulloch, 1993) that favor shrinkage of the model parameters in an automatic fashion and require only minimal prior input from the researcher. As discussed in Korobilis (2011), sparsity in our model parameters can be induced at a minimal computational cost with small adjustments of the standard Gibbs sampler algorithm commonly used for Bayesian linear regressions (Koop, 2003). We discuss our model within a study in which we wish to characterize the shape changes of human mandible profiles over years.…”
Section: Introduction (mentioning)
confidence: 99%
“…Caner and Zhang, 2014), or macroeconomics (e.g. Korobilis, 2013). A sparse cointegration approach is useful for several reasons.…”
Section: Introduction (mentioning)
confidence: 99%