2016
DOI: 10.1093/biomet/asw036
Sparse envelope model: efficient estimation and response variable selection in multivariate linear regression

Abstract: Envelope methodology can provide substantial efficiency gains in multivariate statistical problems, but in some applications the estimation of the envelope dimension can induce selection volatility that may mitigate those gains. Current envelope methodology does not account for the added variance that can result from this selection. In this article, we circumvent dimension selection volatility through the development of a weighted envelope estimator. Theoretical justification is given for our estimator and val…

Cited by 53 publications (39 citation statements, 2017–2022); references 28 publications.
“…Therefore, for estimation we modify the objective functions in equations (4.4) and (4.5) to
$$f_{L|2}(\mathbf{G}) = \log\left|\mathbf{G}^{\mathrm{T}} \mathbf{S}_{\mathrm{res}|2} \mathbf{G}\right| + \log\left|\mathbf{G}^{\mathrm{T}} \mathbf{S}_{Y|2}^{-1} \mathbf{G}\right| + \lambda_1 \sum_{i=1}^{r} w_{1,i} \left\|\mathbf{G}_i\right\|_2,$$
$$f_{R|1}(\mathbf{U}) = \log\left|\mathbf{U}^{\mathrm{T}} \mathbf{S}_{\mathrm{res}|1} \mathbf{U}\right| + \log\left|\mathbf{U}^{\mathrm{T}} \mathbf{S}_{Y|1}^{-1} \mathbf{U}\right| + \lambda_2 \sum_{j=1}^{m} w_{2,j} \left\|\mathbf{U}_j\right\|_2,$$
where $\mathbf{G}_i$ is the $i$th row of $\mathbf{G}$ and $\mathbf{U}_j$ is the $j$th row of $\mathbf{U}$. We attain the minimization of $f_{L|2}(\mathbf{G})$ and $f_{R|1}(\mathbf{U})$ by a non-Grassmann, blockwise coordinate descent algorithm (Su et al.; Cook et al.) to update $\mathbf{G}$ and $\mathbf{U}$ iteratively until convergence. Once $\mathbf{L}$ and $\mathbf{R}$ have been estimated, the rest of the parameters are estimated in the same way as in the non-sparse setting.…”
Section: Sparse Matrix Variate Regression
confidence: 99%
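The objective quoted above combines two log-determinant terms with a row-wise group penalty, so that responses whose rows of the basis matrix are shrunk to zero drop out of the model. A minimal sketch of evaluating such an objective is below; the function name, the toy matrices, and the weights are illustrative assumptions, and the actual estimation uses the blockwise coordinate descent algorithm described in the quote, not a single evaluation.

```python
import numpy as np

def envelope_group_objective(G, S_res, S_Y, lam, w):
    """Evaluate a penalized envelope-type objective of the quoted form
    (illustrative sketch, not the authors' implementation):
        f(G) = log|G^T S_res G| + log|G^T S_Y^{-1} G| + lam * sum_i w_i ||G_i||_2,
    where G_i is the i-th row of G; a zero row excludes response i."""
    _, ld1 = np.linalg.slogdet(G.T @ S_res @ G)            # first log-determinant term
    _, ld2 = np.linalg.slogdet(G.T @ np.linalg.inv(S_Y) @ G)  # second log-determinant term
    row_norms = np.linalg.norm(G, axis=1)                   # ||G_i||_2 for each row i
    return ld1 + ld2 + lam * np.dot(w, row_norms)           # group (row-wise) penalty

# Toy usage: r = 4 responses, envelope dimension u = 2 (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); S_res = A @ A.T + 4.0 * np.eye(4)  # positive definite
B = rng.standard_normal((4, 4)); S_Y = B @ B.T + 4.0 * np.eye(4)    # positive definite
G = np.linalg.qr(rng.standard_normal((4, 2)))[0]  # semi-orthogonal basis estimate
w = np.ones(4)                                     # adaptive weights (all 1 here)
val = envelope_group_objective(G, S_res, S_Y, lam=0.5, w=w)
print(float(val))
```

In the coordinate descent scheme the quote describes, each row block of G (and of U) is updated in turn with the others held fixed, which is what makes the row-wise grouping of the penalty computationally convenient.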
“…In particular, Su et al. () studied sparse multivariate regression with an envelope structure. However, no investigation has been done in the matrix-variate setting.…”
Section: Sparse Matrix Variate Regression
confidence: 99%
“…Other forms of singular value penalization were considered in, e.g., Mukherjee and Zhu (2011), Chen et al. (2013), and Zhou and Li (2014). In addition, some recent efforts further improve low-rank methods by incorporating error covariance modeling, such as envelope models (Cook et al., 2015), or by utilizing variable selection (Bunea et al., 2012; Chen and Huang, 2012; Su et al., 2016).…”
confidence: 99%
“…Cook et al. () incorporated the idea of enveloping into reduced-rank regression and demonstrated efficiency gains. Su et al. () developed a sparse envelope model that performs response variable selection under the envelope model. Khare et al. () proposed a comprehensive Bayesian framework for estimation and model selection in envelope models.…”
Section: Introduction
confidence: 99%