2019
DOI: 10.1080/02331888.2019.1662017

Large-scale multiple hypothesis testing with the normal-beta prime prior

Abstract: We revisit the problem of simultaneously testing the means of n independent normal observations under sparsity. We take a Bayesian approach to this problem by studying a scale-mixture prior known as the normal-beta prime (NBP) prior. To detect signals, we propose a hypothesis test based on thresholding the posterior shrinkage weight under the NBP prior. Taking the loss function to be the expected number of misclassified tests, we show that our test procedure asymptotically attains the optimal Bayes risk when t…
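The test described in the abstract can be illustrated concretely. Below is a minimal Monte Carlo sketch in Python, assuming the standard NBP hierarchy x_i ~ N(θ_i, 1), θ_i | ω_i² ~ N(0, ω_i²), ω_i² ~ β′(a, b), and flagging θ_i as a signal when the posterior shrinkage weight E[ω_i²/(1 + ω_i²) | x_i] exceeds 1/2; the function name, hyperparameter values, and toy data are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nbp_shrinkage_weights(x, a=0.5, b=0.5, n_mc=20_000):
    """Monte Carlo estimate of E[omega^2 / (1 + omega^2) | x_i] under
    x_i ~ N(theta_i, 1), theta_i ~ N(0, omega_i^2), omega_i^2 ~ beta'(a, b)."""
    u = rng.beta(a, b, size=n_mc)
    omega2 = u / (1.0 - u)                 # beta-prime(a, b) draws
    var = 1.0 + omega2                     # marginal Var(x_i | omega_i^2)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # importance weights proportional to N(x_i; 0, 1 + omega^2)
    w = np.exp(-0.5 * x[:, None] ** 2 / var) / np.sqrt(var)
    shrink = omega2 / var                  # "signal" weight 1 - kappa_i
    return (w * shrink).sum(axis=1) / w.sum(axis=1)

# toy data: 90 nulls and 10 signals with marginal variance 1 + 25
x = np.concatenate([rng.normal(0.0, 1.0, 90),
                    rng.normal(0.0, np.sqrt(26.0), 10)])
flagged = nbp_shrinkage_weights(x) > 0.5   # threshold the shrinkage weight at 1/2
print(flagged.sum(), "observations flagged as signals")
```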

Cited by 12 publications (13 citation statements)
References 25 publications
“…Datta and Ghosh (2013) proved that the decision rule induced by the horseshoe estimator is asymptotically Bayes optimal for multiple testing under 0-1 loss up to a multiplicative constant. This result was generalized to include other global-local priors by Ghosh et al (2016) and Bai and Ghosh (2018), among others. van der Pas et al (2014) showed the horseshoe estimator is minimax in ℓ₂ in a nearly-black case up to a constant.…”
Section: Theoretical Properties in Linear Gaussian Models (mentioning)
confidence: 95%
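For context, the optimality results quoted here are usually stated in a two-groups framework. The following is a sketch with illustrative notation, not a quotation from any of the cited papers.

```latex
% Two-groups model: each mean is null or drawn from a Gaussian slab.
\[
  X_i \mid \theta_i \sim N(\theta_i, 1), \qquad
  \theta_i \sim (1 - p)\,\delta_0 + p\,N(0, \psi^2), \qquad i = 1, \dots, n.
\]
% Under 0-1 (misclassification) loss, a testing rule with per-test type I
% and type II error probabilities t_{1i} and t_{2i} has Bayes risk
\[
  R \;=\; \sum_{i=1}^{n} \bigl\{ (1 - p)\, t_{1i} + p\, t_{2i} \bigr\},
\]
% and "asymptotically Bayes optimal up to a multiplicative constant" means
% R / R_{\mathrm{opt}} = O(1) as n \to \infty, where R_{\mathrm{opt}} is
% the risk of the Bayes oracle.
```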
“…p, is unbounded with a singularity at zero for any 0 < a ≤ 1/2 [3]. Proposition 2.1 implies that in order to facilitate sparse recovery of β, we should set the hyperparameter a to be a small value.…”
Section: (mentioning)
confidence: 99%
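To see where the singularity comes from, here is a short calculation under the standard NBP parameterization (a sketch, not from the quoted text): with ω² ~ β′(a, b), the marginal prior density of each coefficient is finite at the origin only when a > 1/2.

```latex
% Beta-prime mixing density on the local variance omega^2:
\[
  \pi(\omega^2) \;=\; \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,
  (\omega^2)^{a-1}\,(1+\omega^2)^{-(a+b)}, \qquad \omega^2 > 0.
\]
% Marginal prior density of a coefficient beta evaluated at zero:
\[
  m(0) \;\propto\; \int_0^\infty (\omega^2)^{-1/2}\,(\omega^2)^{a-1}\,
  (1+\omega^2)^{-(a+b)}\, d\omega^2 ,
\]
% whose integrand behaves like (omega^2)^{a - 3/2} near zero; this is
% integrable iff a > 1/2, so m(0) is infinite (a singularity at zero)
% for any 0 < a <= 1/2, matching the quoted proposition.
```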
“…We call our model the normal-beta prime (NBP) prior. Bai and Ghosh [3] previously studied the NBP model in the context of multiple hypothesis testing of normal means. Here, we extend the NBP prior to high-dimensional linear regression (1.1).…”
Section: Introduction (mentioning)
confidence: 99%
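The regression extension mentioned in this excerpt presumably places the beta-prime scale mixture on each coefficient; a sketch of the usual global-local form follows (the exact hierarchy in the citing paper may differ).

```latex
% NBP-style scale mixture carried over to linear regression (a sketch):
\[
  y \mid \beta, \sigma^2 \sim N_n(X\beta, \sigma^2 I_n), \qquad
  \beta_j \mid \omega_j^2, \sigma^2 \sim N(0, \sigma^2 \omega_j^2), \qquad
  \omega_j^2 \sim \beta'(a, b), \quad j = 1, \dots, p.
\]
```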
“…The decision to threshold wavelet coefficients to zero may be seen as a form of variable selection with respect to the wavelet basis representation (e.g., see Stingo, Vannucci, & Downey, 2012). Therefore, other recent priors for Bayesian variable selection such as the horseshoe prior (Carvalho, Polson, & Scott, 2010), the Dirichlet–Laplace prior (Bhattacharya, Pati, Pillai, & Dunson, 2015), and the normal‐beta prime prior (Bai & Ghosh, 2019) may be considered as priors for wavelet coefficients. We note that wavelet representations of signals are usually sparse, that is, few wavelet coefficients contain most of the information and thus many wavelet coefficients may be set to zero without much loss of information.…”
Section: Multiscale Decompositions (mentioning)
confidence: 99%
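To make the thresholding-as-selection idea in this excerpt concrete, here is a minimal sketch using the PyWavelets package; the wavelet choice, decomposition level, and universal-threshold cutoff are illustrative, not prescribed by the quoted text.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)

# noisy piecewise-constant signal of length 1024
n = 1024
signal = np.repeat([0.0, 4.0, -2.0, 0.0], n // 4)
y = signal + rng.normal(0.0, 1.0, n)

# multiscale decomposition: a few coefficients carry most of the signal
coeffs = pywt.wavedec(y, "db4", level=5)

# hard-threshold the detail coefficients (variable selection in the
# wavelet basis); sigma * sqrt(2 log n) is one common illustrative cutoff
thr = 1.0 * np.sqrt(2.0 * np.log(n))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]

denoised = pywt.waverec(coeffs, "db4")
```

Setting small detail coefficients to zero here plays the same role that a sparsity-inducing prior such as the NBP would play on the wavelet coefficients: most coefficients are shrunk to (or near) zero with little loss of information.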