2018
DOI: 10.1016/j.jmva.2018.04.010

High-dimensional multivariate posterior consistency under global–local shrinkage priors

Abstract: We consider sparse Bayesian estimation in the classical multivariate linear regression model with p regressors and q response variables. In univariate Bayesian linear regression with a single response y, shrinkage priors which can be expressed as scale mixtures of normal densities are popular for obtaining sparse estimates of the coefficients. In this paper, we extend the use of these priors to the multivariate case to estimate a p × q coefficients matrix B. We derive sufficient conditions for posterior consis…
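The scale-mixture-of-normals construction in the abstract can be illustrated with a minimal sketch: each row of the coefficient matrix B gets its own local scale, multiplied by a shared global scale, so a draw from the prior is a normal matrix rescaled row-wise. The half-Cauchy local scales and the fixed values of p, q, and the global parameter below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 50, 3   # number of predictors and responses (illustrative)
tau = 0.1      # global shrinkage parameter (held fixed for this sketch)

# Local scales: half-Cauchy draws give a horseshoe-type global-local prior.
lam = np.abs(rng.standard_cauchy(size=p))

# Row i of B has entries N(0, tau^2 * lam_i^2): a scale mixture of normals.
# Small tau shrinks most rows toward zero; heavy-tailed lam_i lets a few
# rows escape shrinkage, which is what produces row-sparse estimates of B.
B = rng.standard_normal((p, q)) * (tau * lam)[:, None]
```

Because the local scale is shared across a row, all q responses' coefficients for a given predictor are shrunk together, which is the natural multivariate extension of the univariate shrinkage priors mentioned in the abstract.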

Cited by 25 publications (92 citation statements)
References 48 publications
“…Following the arguments from Schwartz (1965) (also used in Bai and Ghosh (2018) and Armagan et al. (2013)), the first condition in Theorem 2 can be shown to imply Φ_n → 0 a.s. Further, e^{nC} J_{1n} → 0 a.s. for a constant C > 0 that may depend on auxiliary parameters (such as Ψ and the eigenvalues of the design matrix) but not on B_0. Similarly, the second condition of Theorem 2 can be shown to imply that for any constant c > 0, e^{nc} J_{2n} → ∞ a.s.…”
Section: Discussion
confidence: 96%
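The quoted excerpt is the standard Schwartz-style posterior consistency argument; a sketch of how the two conditions combine, with J_{1n} and J_{2n} denoting the numerator and denominator integrals as in the excerpt (the exact decomposition is an assumption inferred from the quoted notation):

```latex
% Schwartz-type bound on the posterior probability of the complement
% U_n^c of a shrinking neighborhood of the true coefficient matrix B_0:
\[
  \Pi\!\left(U_n^{c} \mid \text{data}\right)
  \;\le\; \Phi_n \;+\; \frac{J_{1n}}{J_{2n}}
  \;=\; \Phi_n \;+\; \frac{e^{nc}\,J_{1n}}{e^{nc}\,J_{2n}},
  \qquad 0 < c < C .
\]
% Since e^{nC} J_{1n} \to 0 a.s. for some C > 0, we also have
% e^{nc} J_{1n} \to 0 a.s. for any c < C; combined with
% e^{nc} J_{2n} \to \infty a.s., the ratio vanishes, and together
% with \Phi_n \to 0 a.s. the posterior mass of U_n^c goes to zero.
```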
“…This relaxation comes at a cost, mainly assumption (A2), which essentially bounds the entries of the design matrix. In contrast, Bai and Ghosh (2018) assume upper and lower (asymptotic) bounds on the eigenvalues of the design matrix. However, the flexibility gained under our choice is significant, as we require no condition (except continuity) on the prior for B.…”
Section: Posterior Consistency
confidence: 99%
“…Hence, in this work we focus on developing a more flexible variable selection method that encourages the inclusion of similar sets of predictors in each of the K models by extending the GL shrinkage framework. Recently, Bai and Ghosh (2018) independently explored a similar setup and proposed their Multivariate Bayesian Model with Shrinkage Priors (MBSP). We will discuss differences that distinguish our work in later sections.…”
Section: Bayesian Variable Selection For Multi-outcome Models Through
confidence: 99%