2011 IEEE International Symposium on Information Theory Proceedings
DOI: 10.1109/isit.2011.6033975
Low-rank matrix recovery from errors and erasures

Abstract: This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both (a) erasures: most entries are not observed, and (b) errors: values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when a (natural) recently proposed method, based on convex optimization, succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and …
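The convex program the abstract refers to is the standard nuclear-norm-plus-ℓ1 formulation: minimize ||L||_* + λ||S||_1 subject to the observed entries of L + S matching the data. The sketch below is not the authors' implementation; the regularization choice λ = 1/√max(m, n), the inexact augmented Lagrangian solver, and the trick of leaving S unpenalized on unobserved entries (so that erasures are absorbed into the sparse term) all follow common robust-PCA practice and are assumptions here.

```python
import numpy as np

def rpca_with_erasures(M, mask, lam=None, max_iter=500, tol=1e-7):
    """Recover a low-rank L and sparse S from partial, corrupted entries.

    Sketch of  min ||L||_* + lam * ||P_Omega(S)||_1
               s.t. P_Omega(L + S) = P_Omega(M)
    solved by an inexact augmented Lagrangian method. Entries of S outside
    the observation mask are not penalized, which absorbs the erasures.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common robust-PCA default
    M = M * mask                          # zero out unobserved entries
    norm_M = np.linalg.norm(M, 2)         # spectral norm, sets the scale
    mu, rho = 1.25 / norm_M, 1.5          # typical penalty schedule
    Y = M / max(norm_M, np.abs(M).max() / lam)   # standard dual init
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-update: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # S-update: soft-threshold on observed entries, exact fit elsewhere
        R = M - L + Y / mu
        shrunk = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        S = np.where(mask, shrunk, R)
        # dual ascent on the constraint residual
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7 / norm_M)
        if np.linalg.norm(Z, 'fro') < tol * norm_M:
            break
    return L, S
```

On synthetic data matching the paper's setting (a rank-2 matrix with 20% of entries erased and a few percent of the observed entries grossly corrupted), this sketch recovers the low-rank component to small relative error, illustrating the exact-recovery regime the guarantee describes.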

Cited by 58 publications (120 citation statements)
References 12 publications
“…From the first three conditions, one can conclude that the pair (A, E*) is an optimal solution of (P) by a direct application of the KKT conditions. Uniqueness can be established by standard arguments regarding transverse intersections of the subspace Ω and the invariant spaces of G; see [6, Proposition 2] and [8, Lemma 6]. Conditions 1, 2, and 3 of this lemma essentially require that the subdifferential at a matrix specifying the edits with respect to the ℓ1 norm has a nonempty intersection with the relative interior of the normal cone at an adjacency matrix representing G with respect to the Schur-Horn orbitope.…”
Section: Results
confidence: 99%
“…where a fraction of the elements of the low-rank matrix are possibly arbitrarily modified, while others are untouched). Sparse and low-rank matrix decomposition using convex optimization was initiated by [24,25]; follow-up works [26,27] have the current state-of-the-art guarantees on this problem, and [28] applies it directly to graph clustering.…”
Section: Related Work
confidence: 99%
“…Recovering low-rank and sparse matrices from incomplete or even corrupted observations is a common problem in many application areas, including statistics [1,9,51], bioinformatics [37], machine learning [28,47,49,52], computer vision [5,7,42,43,58], and signal and image processing [27,30,38]. In these areas, data often have high dimensionality, such as digital photographs and surveillance videos, which makes inference, learning, and recognition infeasible due to the "curse of dimensionality.…”
Section: Introduction
confidence: 99%