2010 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.2010.5513535

Stable Principal Component Pursuit

Abstract: In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously…


Cited by 465 publications (447 citation statements) · References: 12 publications
“…If the variable x is further constrained to be nonnegative, then the corresponding compressive sensing problem can be formulated as a three-block (K = 3) convex separable optimization problem (1.1) by introducing a slack variable. Similarly, in the stable version of robust principal component analysis (PCA) [59], we are given an observation matrix $M \in \mathbb{R}^{m \times n}$ which is a noise-corrupted sum of a low-rank matrix L and a sparse matrix S. The goal is to recover L and S by solving the following nonsmooth convex optimization problem
$$\min_{L,S,Z} \; \|L\|_* + \rho \|S\|_1 + \lambda \|Z\|_F^2 \quad \text{subject to} \quad L + S + Z = M,$$
where $\|\cdot\|_*$ denotes the matrix nuclear norm (defined as the sum of the matrix's singular values), while $\|\cdot\|_1$ and $\|\cdot\|_F$ denote, respectively, the entry-wise $\ell_1$ norm and the Frobenius norm of a matrix (equal to the standard $\ell_1$ and $\ell_2$ vector norms when the matrix is viewed as a vector). In the above formulation, Z denotes the noise matrix, and ρ, λ are some fixed penalty parameters.…”
Section: Introduction (mentioning) · confidence: 99%
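The excerpt above gives the complete stable PCP program, so a small numerical sketch may help make it concrete. The following is a minimal NumPy sketch, not the solver used in the cited works: eliminating Z via Z = M − L − S turns the constrained program into an unconstrained one, and block-coordinate descent then solves each subproblem exactly (singular value thresholding for L, entry-wise soft thresholding for S). The function names (`stable_pcp`, `svt`, `soft`) and the iteration count are illustrative assumptions.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of tau * nuclear norm at A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    """Entry-wise soft thresholding: prox of tau * l1 norm at A."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def stable_pcp(M, rho, lam, n_iter=300):
    """Block-coordinate descent (illustrative sketch) on
        min_{L,S} ||L||_* + rho*||S||_1 + lam*||M - L - S||_F^2,
    i.e. the excerpt's program with Z = M - L - S eliminated.
    Each block update has a closed form."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, 1.0 / (2.0 * lam))   # exact L-update
        S = soft(M - L, rho / (2.0 * lam))  # exact S-update
    return L, S, M - L - S                  # Z returned as the residual
```

Both closed forms follow from the standard proximal operators of the nuclear and $\ell_1$ norms; the weights ρ and λ must still be tuned to the corruption and noise levels, which is exactly the question the excerpts below examine.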
“…1(a)). For Sets 2, 3, and 5 with non-sparse E, the empirical optimal λ (0.002) is smaller than the theoretical λ* (0.003), contrary to the theory of [13]. At this lower λ, the ranks of the optimal A recovered by LrALM are still larger than the known value of 1 (Fig.…”
Section: Experiments and Discussion (mentioning) · confidence: 74%
“…So the desired rank of the low-rank matrix is 1. LrALM and FrALM were tested on the test sets over a range of λ from 0.0001 to 0.5, including the theoretical optimal λ of $1/\sqrt{m} = 0.003$, denoted as λ*, as proved in [13]. The parameters ρ and initial µ were set to the default values of 6 and $0.5/\sigma_1$, where $\sigma_1$ is the largest singular value of the initial Y, as for LrALM.…”
Section: Experiments and Discussion (mentioning) · confidence: 99%
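The two excerpts above turn on the theoretical weight λ*, which in the robust PCA analysis they cite as [13] takes the form $\lambda^* = 1/\sqrt{\max(m, n)}$. A one-line check, with hypothetical dimensions since the excerpts do not state m, shows how a data matrix whose larger side is about $1.1 \times 10^5$ reproduces the quoted λ* = 0.003.

```python
import math

def pcp_weight(m, n):
    """PCP trade-off weight lambda* = 1/sqrt(max(m, n)), the
    theoretical choice discussed in the excerpts above."""
    return 1.0 / math.sqrt(max(m, n))

# Hypothetical dimensions: any matrix whose larger side is ~1.1e5
# (e.g., a vectorized image per column) yields lambda* ~ 0.003.
print(round(pcp_weight(111_000, 50), 4))  # 0.003
```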