2020
DOI: 10.48550/arxiv.2001.08465
Preprint

Shrinkage with Robustness: Log-Adjusted Priors for Sparse Signals

Abstract: We introduce a new class of distributions, named log-adjusted shrinkage priors, for the analysis of sparse signals; it extends the three-parameter beta priors by multiplying their densities by an additional log term. The proposed prior has density tails heavier than even those of the Cauchy distribution and achieves tail-robustness of the Bayes estimator, while retaining a strong shrinkage effect on noise. We verify this property via improved posterior mean squared errors in the tail. An int…
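The "shrink noise, keep signals" behavior described above can be illustrated numerically. The sketch below is not the paper's log-adjusted prior; it uses the baseline horseshoe prior (Carvalho et al., 2010) cited later on this page, under the standard model y | theta ~ N(theta, 1), theta | lam ~ N(0, lam^2), lam ~ half-Cauchy(0, 1), and computes the posterior mean by one-dimensional numerical integration over the local scale. Log-adjusted priors are designed to strengthen exactly this tail-robustness.

```python
# Illustrative sketch (assumed model, not the paper's prior): horseshoe
# posterior mean E[theta | y], showing strong shrinkage of small (noise-like)
# observations and near-zero shrinkage of large (signal-like) observations.
import numpy as np
from scipy import integrate, stats

def horseshoe_posterior_mean(y):
    """E[theta | y] via numerical integration over the local scale lam."""
    # Marginal-likelihood weight for a given local scale lam.
    def weight(lam):
        return (stats.norm.pdf(y, 0.0, np.sqrt(1.0 + lam**2))
                * stats.halfcauchy.pdf(lam))
    # Given lam, the conditional posterior mean is lam^2 / (1 + lam^2) * y.
    num, _ = integrate.quad(
        lambda lam: (lam**2 / (1.0 + lam**2)) * y * weight(lam), 0, np.inf)
    den, _ = integrate.quad(weight, 0, np.inf)
    return num / den

small = horseshoe_posterior_mean(0.5)  # noise-like: pulled strongly toward 0
large = horseshoe_posterior_mean(8.0)  # signal-like: left nearly unshrunk
print(small, large)
```

The relative shrinkage is much larger for the small observation than for the large one; the log-adjusted priors of this paper aim for the same qualitative behavior with heavier tails and hence stronger tail-robustness.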

Cited by 1 publication (1 citation statement); references 24 publications (17 reference statements).
“…Under Gaussian sequence or normal linear regression models, a variety of shrinkage priors have been proposed, including the most famous horseshoe prior (Carvalho et al, 2010) and related priors (e.g. Armagan et al, 2013; Bhadra et al, 2017; Bhattacharya et al, 2015; Hamura et al, 2020; Zhang et al, 2020). This prior has an attractive shrinkage property, making it possible to strongly shrink small observations toward zero while keeping large observations unshrunk.…”
Section: Introduction