2020
DOI: 10.1080/07350015.2020.1713796

Inducing Sparsity and Shrinkage in Time-Varying Parameter Models

Abstract: Time-varying parameter (TVP) models have the potential to be over-parameterized, particularly when the number of variables in the model is large. Global-local priors are increasingly used to induce shrinkage in such models. But the estimates produced by these priors can still have appreciable uncertainty. Sparsification has the potential to remove this uncertainty and improve forecasts. In this paper, we develop computationally simple methods that both shrink and sparsify TVP models. In a simulated data exerc…
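The shrink-then-sparsify idea in the abstract can be sketched with a signal-adaptive soft-thresholding step applied to shrunk coefficient draws, in the spirit of Ray and Bhattacharya's SAVS operator that this literature builds on. This is a minimal illustrative sketch, not the paper's exact implementation: the function name and the tuning choice mu_j = |beta_j|^(-2) are assumptions.

```python
import numpy as np

def sparsify(beta, X):
    """Set small shrunk coefficients exactly to zero (SAVS-style sketch).

    beta : shrunk coefficient vector (e.g., one MCMC draw under a
           global-local prior)
    X    : regressor matrix whose column norms scale the threshold

    Uses the penalty mu_j = |beta_j|^(-2), so small coefficients face a
    large threshold and are zeroed, while large ones are barely touched.
    """
    beta = np.asarray(beta, dtype=float)
    col_norms2 = np.sum(np.asarray(X, dtype=float) ** 2, axis=0)
    # Penalty grows as the shrunk coefficient approaches zero.
    mu = np.where(beta != 0.0, np.abs(beta) ** (-2.0), np.inf)
    # Soft-threshold: max(|beta_j| * ||x_j||^2 - mu_j, 0), then rescale.
    mag = np.maximum(0.0, np.abs(beta) * col_norms2 - mu)
    return np.sign(beta) * mag / col_norms2

# Example: a clearly nonzero coefficient survives; a tiny one is zeroed.
X = np.ones((10, 2))            # each column has squared norm 10
print(sparsify([2.0, 0.01], X))  # [1.975, 0.0]
```

Applied draw-by-draw to MCMC output, this yields a posterior over sparse coefficient vectors, which is what removes the residual uncertainty that pure shrinkage leaves behind.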

Cited by 65 publications (66 citation statements)
References 47 publications
“…(2020) model the time-varying regression coefficients using a reduced-rank structure, and Huber et al. (2020) develop a method that first shrinks the time-varying coefficients and then sets the small values to zero.…”
Section: Extensions and New DMA Models
confidence: 99%
“…One could ask whether it makes a big difference to zero out different a_i's as opposed to setting them close to zero. Setting them close but not exactly to zero essentially implies that there exists a lower bound of accuracy one can achieve under the specific prior distribution (Huber et al, 2020). For small-scale systems, this has negligible implications on predictive accuracy.…”
Section: Achieving Sparsity in VAR Models
confidence: 99%
“…Apart from reduced flexibility in terms of covariate selection across equations, typical shrinkage priors push many VAR coefficients toward zero. Under continuous shrinkage priors, however, this implies that the probability of observing a coefficient that exactly equals zero is zero (see, e.g., Bhattacharya, Pati, Pillai, & Dunson, 2015; Carvalho, Polson, & Scott, 2010; Griffin & Brown, 2010; Huber & Feldkircher, 2019; Huber, Koop, & Onorante, 2020). Spike and slab priors allow for shrinking coefficients exactly to zero.…”
Section: Introduction
confidence: 99%
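The point in the last statement, that continuous shrinkage priors place zero probability on coefficients being exactly zero, can be checked directly by simulation. A minimal sketch under an assumed horseshoe-style prior (half-Cauchy local scales, a fixed global scale tau chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Horseshoe-style draws: beta_j ~ N(0, tau^2 * lambda_j^2),
# lambda_j ~ half-Cauchy(0, 1). tau = 0.1 is an illustrative choice.
tau = 0.1
lam = np.abs(rng.standard_cauchy(1000))
beta = rng.normal(0.0, tau * lam)

# No draw is exactly zero (the prior is continuous) ...
print(int(np.sum(beta == 0.0)))        # 0
# ... but many are shrunk to be very small, hence the case for a
# separate sparsification step such as spike-and-slab or thresholding.
print(float(np.mean(np.abs(beta) < 1e-3)) > 0.0)
```

The contrast with spike-and-slab priors is exactly this: a point mass at zero gives exact zeros positive posterior probability, which continuous global-local priors cannot.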