2011
DOI: 10.1214/11-aos876

Optimal selection of reduced rank estimators of high-dimensional matrices

Abstract: We introduce a new criterion, the Rank Selection Criterion (RSC), for selecting the optimal reduced rank estimator of the coefficient matrix in multivariate response regression models. The corresponding RSC estimator minimizes the Frobenius norm of the fit plus a regularization term proportional to the number of parameters in the reduced rank model. The rank of the RSC estimator provides a consistent estimator of the rank of the coefficient matrix; in general the rank of our estimator is a consistent estimate of…
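
The abstract is cut off above; for orientation, the criterion it describes in words can be sketched in LaTeX as a rank-penalized least squares problem. The multiplier μ below is an assumed generic tuning constant standing in for the "term proportional to the number of parameters", not the paper's calibrated value:

```latex
% Sketch of a rank-penalized criterion of the kind the abstract describes.
% \mu > 0 is an assumed generic tuning constant, not the paper's choice.
\[
  \widehat{B}_{\mathrm{RSC}}
    \in \operatorname*{arg\,min}_{B}
        \Bigl\{\, \lVert Y - XB \rVert_F^{2} + \mu \,\operatorname{rank}(B) \,\Bigr\}
\]
```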

Cited by 206 publications (313 citation statements)
References 21 publications
“…Since minimizing the rank function (7) is generally intractable, and the matrix nuclear norm is usually used as a tight convex surrogate of the matrix rank [35], the rank function (7) can be replaced with the tensor nuclear norm [27]:…”
Section: Nonlocal Low-rank Tensor Approximation (mentioning)
confidence: 99%
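
To make the quoted surrogate concrete: the proximal operator of the nuclear norm is singular value soft-thresholding, so the intractable rank penalty is replaced by a shrinkage step that convex solvers can iterate. The sketch below is illustrative NumPy, with the threshold lam an assumed tuning parameter; it is not code from the cited papers.

```python
import numpy as np

def svt(M, lam):
    """Singular value soft-thresholding: the proximal operator of the
    nuclear norm lam * ||M||_*. Shrinking every singular value by lam
    is the convex stand-in for the intractable rank penalty."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy check: a noisy rank-2 matrix is mapped back to (near) rank 2.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40)) \
    + 0.1 * rng.standard_normal((50, 40))
print(np.linalg.matrix_rank(svt(M, lam=2.0)))  # typically prints 2
```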
“…σ_r(Z) is the so-called nuclear norm of the matrix Z of size m × n. Although a convex tensor nuclear norm, such as (8), could provide satisfactory results in various tensor recovery problems, studies like [35] demonstrated that the matrix nuclear norm over-penalizes large singular values, and thus gives a biased estimator in low-rank structure learning. Fortunately, a folded-concave penalty [26] can be considered to remedy such modeling bias [25,26].…”
Section: Nonlocal Low-rank Tensor Approximation (mentioning)
confidence: 99%
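
The bias remedy mentioned in this excerpt can be illustrated by replacing the soft-threshold with a folded-concave (SCAD-type) rule, which shrinks small singular values but leaves large ones untouched. The sketch assumes the standard SCAD thresholding formula of Fan and Li with the conventional a = 3.7; it is an illustration, not the cited estimator:

```python
import numpy as np

def scad_threshold(s, lam, a=3.7):
    """SCAD thresholding applied elementwise to nonnegative values s.
    Small values are soft-thresholded, large values pass unshrunk,
    avoiding the bias the excerpt attributes to the nuclear norm."""
    return np.where(s <= 2 * lam,
                    np.maximum(s - lam, 0.0),                  # soft region
                    np.where(s <= a * lam,
                             ((a - 1) * s - a * lam) / (a - 2),  # transition
                             s))                                # no shrinkage

def scad_svt(M, lam, a=3.7):
    """Folded-concave analogue of singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(scad_threshold(s, lam, a)) @ Vt
```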
“…The proposed algorithm requires an initial estimate, U^(0), V^(0), and d^(0), which can be obtained from an initial estimate of C through the SVD. One plausible estimator is the reduced-rank least squares estimator (33,34), which is consistent for high-dimensional data (35). Another one is the ridge regression estimator, in which a small identity matrix εI_p is added to Σ to make it invertible.…”
Section: T-SVD Model With the SVD Representation of the Coefficient (mentioning)
confidence: 99%
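
A minimal sketch of the initializer this excerpt describes, assuming the textbook reduced-rank least squares construction (ordinary least squares followed by an SVD truncation of the fitted values); the function name and fixed rank r are illustrative:

```python
import numpy as np

def reduced_rank_ls(X, Y, r):
    """Reduced-rank least squares: project Y onto the column space of X,
    then keep only the top-r singular directions of the fitted values.
    Returns the rank-r coefficient matrix and its SVD factors (U, d, V),
    the kind of initial estimate the excerpt mentions."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # ordinary least squares
    U, d, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = U[:, :r] @ np.diag(d[:r]) @ Vt[:r]          # rank-r fitted values
    B_r, *_ = np.linalg.lstsq(X, P, rcond=None)     # map back to coefficients
    return B_r, (U[:, :r], d[:r], Vt[:r].T)
```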
“…When the factors are unknown, reduced-rank multivariate regression can be used to estimate and identify the model; there both the responses and the inputs are vectors coupled through a matrix. Yuan et al. (2007), Negahban and Wainwright (2011) and Bunea et al. (2011) show that, using the Ky Fan norm or rank regularization, the number of factors can be estimated with high probability. However, the models studied so far focus only on conditional expectations and give little information about the conditional distributions.…”
Section: III (unclassified)
“…One remark is that for the traditional multivariate regression technique introduced in Reinsel and Velu (1998), the number of factors r is assumed to be known or has to be obtained via other methods. However, using the modern regularization methods of Yuan et al. (2007), Bunea et al. (2011) or Negahban and Wainwright (2011), knowing r is not necessary for estimation.…”
Section: Factorizable Sparse Multivariate Quantile Regression (mentioning)
confidence: 99%
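
To make the last point concrete, the rank (and hence the number of factors r) can be selected by minimizing a rank-penalized fit, in the spirit of the rank-regularization results cited above. The penalty level mu below is an assumed input, not the calibrated constant from any of the cited papers:

```python
import numpy as np

def select_rank(X, Y, mu):
    """Pick the rank k minimizing ||Y - X B_k||_F^2 + mu * k, where B_k
    is the rank-k reduced-rank least squares fit. A generic rank-penalized
    criterion; mu is an assumed tuning parameter."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    d = np.linalg.svd(X @ B_ols, compute_uv=False)  # singular values of fit
    rss_full = np.sum((Y - X @ B_ols) ** 2)
    # Truncating to rank k discards the tail singular values, which adds
    # sum(d[k:]**2) to the residual sum of squares.
    costs = [rss_full + np.sum(d[k:] ** 2) + mu * k
             for k in range(len(d) + 1)]
    return int(np.argmin(costs))
```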