2013
DOI: 10.1111/rssb.12031

Regularized Matrix Regression

Abstract: Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensional…

Cited by 179 publications (195 citation statements).
References 51 publications (101 reference statements).
“…The most direct approach to SVT is to apply a full SVD through svd and then soft-threshold the singular values. In practice this approach is used in many matrix learning problems, judging from the distributed code, e.g., Kalofolias, Bresson, Bronstein, and Vandergheynst (2014); Chi et al. (2013); Parikh and Boyd (2013); Yang, Wang, Zhang, and Zhao (2013); Zhou, Liu, Wan, and Yu (2014); Zhou and Li (2014); Zhang et al. (2017); Otazo, Candès, and Sodickson (2015); Goldstein, Studer, and Baraniuk (2015), to name a few. However, the built-in function svd computes the full SVD of a dense matrix, and hence is very time-consuming and computationally expensive for large-scale problems.…”
Section: Introduction (mentioning, confidence: 99%)
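To make the approach described in this passage concrete, here is a minimal NumPy sketch of singular value thresholding via a full SVD: decompose, soft-threshold the singular values, and reassemble. The function name svt_full and the threshold parameter tau are illustrative, not taken from any of the cited packages.

```python
import numpy as np

def svt_full(A, tau):
    """Singular value thresholding via a full SVD.

    Soft-thresholds the singular values of A at level tau and rebuilds
    the matrix; this is the proximal operator of tau * ||.||_* at A.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)   # soft-threshold each singular value
    return (U * s_thresh) @ Vt            # same as U @ diag(s_thresh) @ Vt

# Example: threshold a random 5 x 4 matrix
rng = np.random.default_rng(0)
B = svt_full(rng.standard_normal((5, 4)), tau=1.0)
```

As the quoted passage notes, the full SVD is the bottleneck at scale; partial or randomized SVDs that compute only the singular values above tau are the usual remedy.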
“…The problem has sparked intensive research in recent years and enjoys a broad range of applications, such as personalized recommendation systems (ACM SIGKDD and Netflix 2007) and imputation of massive genomics data (Chi, Zhou, Chen, Del Vecchyo, and Lange 2013). In matrix regression (Zhou and Li 2014), the predictors are two-dimensional arrays such as images or measurements on a regular grid, so a regression coefficient array of the same size is required to completely capture the effects of the matrix predictors.…”
Section: Introduction (mentioning, confidence: 99%)
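For readers unfamiliar with the setup referenced above, the matrix regression model is commonly written as follows; the intercept β₀ and the p × q dimensions are generic notation, not quoted from the cited papers.

```latex
% Matrix regression linear predictor (standard notation):
\[
  y_i = \beta_0 + \langle B, X_i \rangle + \varepsilon_i,
  \qquad
  \langle B, X_i \rangle = \operatorname{tr}\!\left(B^\top X_i\right)
  = \sum_{j=1}^{p} \sum_{k=1}^{q} b_{jk}\, x_{i,jk},
\]
```

so the coefficient matrix B lives in the same p × q space as each matrix predictor X_i, which is exactly why a coefficient array of the same size is needed.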
“…The true signal often has low rank or can be well approximated by a low-rank matrix. Recently, Zhou and Li [5] proposed the so-called regularized matrix regression model, based on spectral regularization, to deal with such matrix-form data. This model includes the well-known Lasso as a special case; see [8] for more details.…”
Section: Introduction (mentioning, confidence: 99%)
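To tie spectral regularization back to the SVT step above, here is a minimal proximal-gradient sketch for nuclear-norm regularized matrix regression on simulated data. This is not the authors' implementation; the function names, step size rule, and λ value are all illustrative assumptions.

```python
import numpy as np

def prox_nuclear(B, tau):
    # Proximal operator of tau * ||B||_*: soft-threshold singular values (SVT).
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def matrix_regression(X, y, lam, n_iter=500):
    """Proximal gradient for min_B (1/2n) sum_i (y_i - <X_i,B>)^2 + lam*||B||_*.

    X: (n, p, q) array of matrix covariates; y: (n,) responses.
    """
    n, p, q = X.shape
    Xmat = X.reshape(n, -1)                        # flatten covariates to (n, p*q)
    step = n / (np.linalg.norm(Xmat, 2) ** 2)      # 1/L for the smooth loss
    B = np.zeros((p, q))
    for _ in range(n_iter):
        resid = Xmat @ B.ravel() - y               # <X_i, B> - y_i
        grad = (Xmat.T @ resid / n).reshape(p, q)  # gradient of the squared loss
        B = prox_nuclear(B - step * grad, step * lam)
    return B

# Tiny demo with a rank-1 true coefficient matrix
rng = np.random.default_rng(1)
n, p, q = 200, 8, 6
B_true = np.outer(rng.standard_normal(p), rng.standard_normal(q))
X = rng.standard_normal((n, p, q))
y = X.reshape(n, -1) @ B_true.ravel() + 0.1 * rng.standard_normal(n)
B_hat = matrix_regression(X, y, lam=0.1)
```

With the nuclear norm, the penalty is a lasso on the singular values of B, which is the sense in which the quoted passage says the model includes the Lasso as a special case.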
“…At the same time, the random noise is not always normal. Such complex stochastic data are frequently collected in a wide variety of research areas, such as information technology, engineering, medical imaging and diagnosis, and finance [1–7]. For instance, a well-known example is the study of an electroencephalography data set on alcoholism.…”
Section: Introduction (mentioning, confidence: 99%)