2017
DOI: 10.1109/tit.2017.2653801

Optimal Shrinkage of Singular Values

Abstract: We consider recovery of low-rank matrices from noisy data by shrinkage of singular values, in which a single, univariate nonlinearity is applied to each of the empirical singular values. We adopt an asymptotic framework in which the matrix size is much larger than the rank of the signal matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. For a variety of loss functions, including Mean Square Error (MSE, i.e., square Frobenius norm), the nuclear norm loss and the operator norm…
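As a concrete illustration of the scheme described in the abstract, the sketch below applies the closed-form shrinker for the Frobenius (MSE) loss in Python. It assumes the white-noise, known-σ setting, the normalization in which singular values divided by sqrt(n)·σ have their noise bulk edge at 1 + sqrt(β) with β = m/n ≤ 1, and m ≤ n; the function names are illustrative, not taken from the paper's code supplement.

```python
import numpy as np

def frobenius_shrinker(y, beta):
    """Optimal singular-value shrinker for Frobenius (MSE) loss in the
    spiked low-rank model.  Singular values are assumed normalized so the
    noise bulk edge sits at 1 + sqrt(beta), with beta = m/n <= 1.
    Values above the edge shrink to sqrt((y^2 - beta - 1)^2 - 4*beta) / y;
    values at or below the edge are set to zero."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    keep = y > 1.0 + np.sqrt(beta)
    t = (y[keep] ** 2 - beta - 1.0) ** 2 - 4.0 * beta
    out[keep] = np.sqrt(np.maximum(t, 0.0)) / y[keep]
    return out

def denoise(X, sigma):
    """Shrink the empirical singular values of a noisy matrix X and rebuild
    it.  Assumes m <= n and i.i.d. noise entries with known standard
    deviation sigma; singular values are rescaled by sqrt(n) * sigma so the
    shrinker's normalization applies."""
    m, n = X.shape
    beta = m / n
    U, y, Vt = np.linalg.svd(X, full_matrices=False)
    scale = np.sqrt(n) * sigma
    return (U * (scale * frobenius_shrinker(y / scale, beta))) @ Vt
```
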

Cited by 182 publications (250 citation statements)
References 37 publications (68 reference statements)

“…the rank of the principal subspace is growing rather than fixed. (In the sibling problem of matrix denoising, compare the “spiked” setup [32, 31, 53] with the “fixed fraction” setup of [67].)…”
Section: Discussion (mentioning)
Confidence: 99%

“…In the code supplement [41] we provide Matlab code to compute the optimal nonlinearity for each of the 26 loss families discussed. In the sibling problem of singular value shrinkage for matrix denoising, [53] demonstrates numerical evaluation of optimal shrinkers for the Schatten-p norm, where analytical derivation of optimal shrinkers appears to be impossible.…”
Section: Optimal Shrinkage for Decomposable Losses (mentioning)
Confidence: 99%

“…Their estimate σ̂_ij^KS is based on a grid search over a candidate set of σ. Recently, Gavish and Donoho () proposed another estimator σ̂_ij^MAD based on random matrix theory, defined as the median of the singular values of X_ij divided by the square root of the median of the Marcenko-Pastur distribution. Our simulations, not shown here, revealed that both σ̂_ij^KS and σ̂_ij^MAD approximate well the standard deviation of a true noise matrix when the data matrix consists of a low-rank signal, and we use σ̂_ij^MAD as the default throughout this paper for its simplicity.…”
Section: Methods (mentioning)
Confidence: 99%

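As a sketch of the estimator described in the excerpt above: the Marcenko-Pastur median is found numerically from its density, and the median empirical singular value is divided by its square root. The sqrt(n) factor is an assumed normalization (i.i.d. noise entries with variance σ²), and the function names are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mp_median(beta):
    """Median of the Marcenko-Pastur law with aspect ratio beta = m/n <= 1
    and unit variance, found numerically from its density."""
    lo, hi = (1.0 - np.sqrt(beta)) ** 2, (1.0 + np.sqrt(beta)) ** 2
    density = lambda x: np.sqrt((hi - x) * (x - lo)) / (2.0 * np.pi * beta * x)
    cdf = lambda t: quad(density, lo, t)[0]
    return brentq(lambda t: cdf(t) - 0.5, lo, hi)

def sigma_mad(X):
    """Noise-level estimate in the spirit of the quoted estimator: the
    median empirical singular value divided by the square root of the
    Marcenko-Pastur median.  The extra sqrt(n) factor is an assumed
    normalization (i.i.d. noise entries of variance sigma^2)."""
    m, n = X.shape
    if m > n:  # use the smaller dimension as m so that beta <= 1
        m, n = n, m
    y_med = np.median(np.linalg.svd(X, compute_uv=False))
    return y_med / np.sqrt(n * mp_median(m / n))
```
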
“…Instead of using the eigenvalues λ_i of PQ⁻¹ or its inverse, we use regularized or shrunken eigenvalues [35], [36], [37]. For example, in light of (8), we can use the following shrunken eigenvalues:…”
Section: The AB Log-det Divergence for Noisy and Ill-conditioned Covariance… (mentioning)
Confidence: 99%

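The excerpt stops before the shrinkage formula referenced as its equation (8), so the specific rule is not visible here. Purely as a generic illustration of regularizing the eigenvalues of PQ⁻¹ (computed as generalized eigenvalues of the pencil (P, Q)) before they enter a log-det divergence, one might clip them to a fixed range; the function name and clipping bounds below are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def shrunken_eigs(P, Q, floor=1e-3, ceil=1e3):
    """Eigenvalues of P Q^{-1}, obtained as the generalized eigenvalues of
    the pencil (P, Q) for symmetric P and positive-definite Q, then clipped
    to [floor, ceil].  The clipping rule is only a generic regularizer for
    illustration; it is not the shrinkage formula of the excerpt's eq. (8)."""
    lam = eigh(P, Q, eigvals_only=True)
    return np.clip(lam, floor, ceil)
```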