2019
DOI: 10.1093/biomet/asz056

Bayesian sparse multiple regression for simultaneous rank reduction and variable selection

Abstract: We develop a Bayesian methodology aimed at simultaneously estimating low-rank and row-sparse matrices in a high-dimensional multiple-response linear regression model. We consider a carefully devised shrinkage prior on the matrix of regression coefficients which obviates the need to specify a prior on the rank, and shrinks the regression matrix towards low-rank and row-sparse structures. We provide theoretical support to the proposed methodology by proving minimax optimality of the posterior mean under the pred…
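For orientation, a minimal sketch of the model class the abstract describes, written in standard reduced-rank regression notation (the symbols Y, X, C, E and the factorization below are our own shorthand, not taken from the paper):

    \[
      Y = XC + E, \qquad Y \in \mathbb{R}^{n \times q},\; X \in \mathbb{R}^{n \times p},\; C \in \mathbb{R}^{p \times q},
    \]
    \[
      C = BA^{\top}, \qquad B \in \mathbb{R}^{p \times r},\; A \in \mathbb{R}^{q \times r},\; r \ll \min(p, q),
    \]

so that rank(C) ≤ r, while row-sparsity (most rows of C equal to zero) means only a small subset of the p predictors influences any of the q responses. Because the shrinkage prior pulls C towards both structures at once, no explicit prior on the rank r is required.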

Cited by 12 publications (12 citation statements)
References 52 publications (90 reference statements)
“…However, one downside of our approach is the over-shrinkage of the loading values of the recovered sparse models. A possible solution to this problem would be the adoption of a particular case of our method, in which the posterior means of the loadings are used as weights, as in P. R. Hahn and Carvalho (2015), Chakraborty et al (2020) and Woody et al (2019), which would mitigate the problem of over-shrinkage.…”
Section: Discussion
confidence: 99%
“…The idea behind DSS is to make explicit the trade-off between sparsity and the predictive performance of the model through a posterior summary obtained from sparse point estimates generated from samples of the posterior distribution. In recent years, this framework has been extended to a variety of statistical models (Puelz et al, 2017; Bashir et al, 2018; Woody et al, 2019; MacEachern and Miyawaki, 2019; Kowal and Bourgeois, 2020; Chakraborty et al, 2020).…”
Section: Introduction
confidence: 99%
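To make the DSS-style post-processing concrete, here is a minimal Python sketch (our own illustration, not code from any of the cited papers): posterior draws under a continuous shrinkage prior are summarized by an adaptive-lasso-type program whose penalty weights come from the posterior mean, so the summary contains exact zeros while the fit targets the posterior predictive mean.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 100, 20
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]

    # Stand-in for posterior draws under a continuous shrinkage prior;
    # in practice these would come from an MCMC sampler.
    beta_draws = beta_true + rng.normal(scale=0.1, size=(1000, p))
    beta_bar = beta_draws.mean(axis=0)   # posterior mean: dense, no exact zeros

    # DSS-style summary (Hahn and Carvalho, 2015): penalized fit to the
    # posterior predictive mean X @ beta_bar, with adaptive weights
    # w_j = 1 / |posterior mean_j| implemented by rescaling the columns of X.
    w = 1.0 / (np.abs(beta_bar) + 1e-8)
    fit = Lasso(alpha=0.05).fit(X / w, X @ beta_bar)
    beta_sparse = fit.coef_ / w          # map back to the original scale
    print(np.flatnonzero(beta_sparse))   # indices retained in the sparse summary

The sparse point estimate trades a small loss in fit for exact zeros; the amount of sparsity is governed by the penalty level alpha.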
“…The features with nonzero coefficients are considered key features. As a typical example, the least absolute shrinkage and selection operator (LASSO) method (Kok et al, 2019; Chakraborty et al, 2020) is commonly used in regression problems. Logistic regression and linear SVM are adopted in classification problems (Pes, 2019).…”
Section: Feature Selection Methods
confidence: 99%
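A generic instance of this LASSO-based selection, sketched with scikit-learn (our own toy example, not taken from the cited papers): the features whose fitted coefficients are nonzero are the ones retained.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    coef = np.zeros(p)
    coef[[0, 7, 19]] = [1.5, -2.0, 1.0]          # only three informative features
    y = X @ coef + rng.normal(scale=0.5, size=n)

    model = Lasso(alpha=0.1).fit(X, y)
    key_features = np.flatnonzero(model.coef_)   # features with nonzero coefficients
    print(key_features)                          # ideally recovers [0, 7, 19]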
“…First, we employ continuous global-local shrinkage priors for pushing the parameter space towards sparsity. However, as noted by Chakraborty et al (2020), such priors only achieve approximate zeros, and the probability of observing exact zeros is zero. As a remedy, we post-process our posterior draws by minimizing Lasso-type loss functions to obtain sparse estimates for the cointegration matrix, the autoregressive parameters and the covariance matrix (see also Hahn and Carvalho, 2015; Ray and Bhattacharya, 2018).…”
Section: Introduction
confidence: 99%
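The "approximate zeros" point is easy to see by simulation: draws from a continuous global-local prior are never exactly zero, only very small. A toy sketch with horseshoe-type scales (the prior settings here are our own choices):

    import numpy as np

    rng = np.random.default_rng(2)
    p, tau = 10_000, 0.1
    # beta_j ~ N(0, tau^2 * lambda_j^2) with local scales lambda_j ~ half-Cauchy(0, 1)
    lam = np.abs(rng.standard_cauchy(p))
    beta = rng.normal(scale=tau * lam)
    print(np.sum(beta == 0.0))            # 0: no draw is exactly zero
    print(np.mean(np.abs(beta) < 1e-3))   # yet a large share is shrunk near zero

Hence the need for a post-processing step, such as the Lasso-type loss minimization sketched above, whenever exact zeros are required.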
“…Both limitations are often impractical, and identifying restrictions on the cointegration relationships can be tedious in large-scale models from an applied perspective. Relevant contributions in this context are Bunea et al (2012), Jochmann et al (2013), Eisenstat et al (2016), Huber and Zörner (2019), Chakraborty et al (2020), Hauzenberger et al (2020a,c) and Huber et al (2020b).…”
Section: Introduction
confidence: 99%