2019
DOI: 10.48550/arxiv.1907.01333
Preprint

On Global-local Shrinkage Priors for Count Data

Abstract: Global-local shrinkage priors have been recognized as a useful class of priors that can strongly shrink small signals toward prior means while keeping large signals unshrunk. Although such priors have been extensively discussed under Gaussian responses, count responses are frequently encountered in practice, where the existing knowledge of global-local shrinkage priors cannot be directly imported. In this paper, we discuss global-local shrinkage priors for analyzing a sequence of counts. We provide sufficient cond…
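To fix ideas, here is a minimal simulation sketch of a generic Poisson model with a conjugate global-local gamma prior. The parameterization λ_i ~ Ga(α, α/(τ u_i)), the half-Cauchy local prior, and all variable names are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Minimal sketch (assumed formulation, not the paper's exact model):
# y_i ~ Poisson(lambda_i), lambda_i ~ Gamma(alpha, rate = alpha / (tau * u_i)),
# so E[lambda_i] = tau * u_i: tau is a global scale, u_i are local scales.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, tau = 200, 2.0, 1.0

# Heavy-tailed local scales (half-Cauchy, used here purely for illustration).
u = np.abs(rng.standard_cauchy(n))

lam = rng.gamma(shape=alpha, scale=tau * u / alpha)
y = rng.poisson(lam)

# Conjugate posterior mean of lambda_i given (tau, u_i): shrinks y_i toward
# the prior mean tau * u_i when u_i is small, and barely at all when u_i is large.
post_mean = (alpha + y) / (alpha / (tau * u) + 1.0)
print(y[:5], post_mean[:5])
```

Under this kind of conjugacy, small local scales pull noisy counts strongly toward the prior mean while large local scales leave big signals essentially untouched, which is the global-local behavior the abstract describes.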

Cited by 3 publications (1 citation statement). References: 18 publications.
“…For the heavy-tailed distribution that comprises the finite mixture, Student's t-distribution is still regarded as thin-tailed, given its sensitivity to outliers. We propose the use of distributions that have been utilized in robust inference for high-dimensional count data (Hamura et al., 2019) for their extremely heavy tails. This is another scale mixture of normals, by the gamma distribution with a hierarchical structure on the shape parameters, which allows posterior inference by a simple but efficient Gibbs sampler.…”
Section: Introduction (mentioning)
confidence: 99%
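For context on the "extremely heavy tails" the citing authors mention: my reading is that the construction is related to the extremely heavy-tailed (EH) class of Hamura et al., whose survival function decays only logarithmically, P(U > t) = (1 + log(1 + t))^(-γ). The sketch below (the EH survival form and the parameter values are assumptions on my part) contrasts that decay with the inverse-gamma mixing distribution that produces Student's t.

```python
# Sketch contrasting tail decay of two variance-mixing distributions
# (assumed forms; see the lead-in above):
#   EH:          P(U > t) = (1 + log(1 + t))^(-gamma)  -- log-Pareto, extremely heavy
#   Student's t: variance ~ InvGamma(nu/2, nu/2), tail decays polynomially
import numpy as np
from scipy import stats

gam, nu = 1.0, 3.0
t = np.array([1e2, 1e4, 1e8, 1e16])

eh_tail = (1.0 + np.log1p(t)) ** (-gam)                  # decays like 1 / log(t)
ig_tail = stats.invgamma(a=nu / 2, scale=nu / 2).sf(t)   # decays like t^(-nu/2)

for ti, e, i in zip(t, eh_tail, ig_tail):
    print(f"t={ti:.0e}  EH={e:.3e}  InvGamma={i:.3e}")
```

The log-Pareto tail is what lets a normal scale mixture of this kind accommodate outliers that a fixed-shape mixture such as Student's t would still treat as signal, consistent with the quoted statement.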