2016
DOI: 10.48550/arxiv.1610.05559

On the Hyperprior Choice for the Global Shrinkage Parameter in the Horseshoe Prior

Cited by 15 publications (13 citation statements)
References 0 publications
“…We propose a dynamic horseshoe process as the prior for the innovations ω_t in the model; we scale by the observation error variance and the sample size (Piironen and Vehtari, 2016).…”
Section: Bayesian Trend Filtering With Dynamic Shrinkage Processes
confidence: 99%
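The "scale by the observation error variance and the sample size" in the excerpt above refers to the reference value for the global shrinkage scale recommended in the cited paper, tau0 = p0/(D - p0) * sigma/sqrt(n). A minimal sketch of that computation (the function name and example numbers are illustrative, not from the paper):

```python
import math

def tau0_reference(p0, D, sigma, n):
    """Reference scale for the global shrinkage parameter tau:
    tau0 = p0 / (D - p0) * sigma / sqrt(n),
    where p0 is a prior guess for the number of relevant predictors,
    D the total number of predictors, sigma the observation noise
    standard deviation, and n the sample size."""
    return p0 / (D - p0) * sigma / math.sqrt(n)

# e.g. 5 relevant effects expected among 100 predictors, unit noise, n = 200
print(tau0_reference(5, 100, 1.0, 200))  # ≈ 0.0037
```

The point of the scaling is that tau0 shrinks as n grows or as the expected fraction of relevant predictors falls, so the prior's implied number of nonzero effects stays roughly constant.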
“…For the slope parameters β_µ, we anticipated a set of sparse effects; i.e., of the large number of predictors and interactions between predictors, we assumed that many of the associated effects would be below the noise threshold, but some fraction would have significant effects on the survival time. Thus, we used a regularized horseshoe prior (23) to induce sparsity in the joint distribution of effect size parameters, β_µ. This sparsity-inducing prior allows us to model the full range of pairwise interaction terms and their effects on survival, without making specific assumptions about which terms to keep and which to discard.…”
Section: Randomized Controlled Trials Are Inefficient
confidence: 99%
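The regularized horseshoe mentioned in the excerpt above can be illustrated by drawing coefficients from the prior. A minimal sketch assuming the standard parameterization (half-Cauchy local and global scales, plus a slab scale c that caps how far large coefficients escape shrinkage); the function name and default values here are chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_regularized_horseshoe(D, tau0=0.1, c=2.0, n_draws=1):
    """Draw coefficient vectors from a regularized horseshoe prior.

    tau0: scale of the half-Cauchy prior on the global shrinkage tau
    c:    slab scale; large local scales are softly truncated at c
    """
    # Half-Cauchy draws obtained as |Cauchy|
    tau = np.abs(tau0 * rng.standard_cauchy(n_draws))        # global scale
    lam = np.abs(rng.standard_cauchy((n_draws, D)))          # local scales
    # Regularization: lambda_tilde^2 -> c^2 when tau^2 * lambda^2 >> c^2
    lam_tilde2 = c**2 * lam**2 / (c**2 + tau[:, None]**2 * lam**2)
    # beta_j ~ Normal(0, tau^2 * lambda_tilde_j^2)
    beta = rng.normal(0.0, np.sqrt(tau[:, None]**2 * lam_tilde2))
    return beta

draws = sample_regularized_horseshoe(D=50, n_draws=1000)
```

Most draws concentrate near zero while a few escape shrinkage, which is exactly the sparsity-inducing behavior the excerpt relies on: interaction terms can be included wholesale because irrelevant ones are shrunk toward zero automatically.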
“…The shrinkage parameters λ_{j,k} and λ_j are common for all times t, and provide factor- and predictor-specific shrinkage: for each predictor j, λ_{j,k} allows some factors k to be nonzero, while λ_j operates as a group shrinkage parameter that may effectively remove predictor j from the model. Lastly, the global shrinkage parameter λ_0 controls the global level of sparsity, and is scaled by 1/√(T − 1) following Piironen and Vehtari (2016). In the case of the non-dynamic FOSR and FOSR-AR models, we simply remove one level of the hierarchy:…”
Section: Shrinkage Priors For The Model
confidence: 99%