2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.322
Abstract: Principal Component Analysis (PCA) is the most widely used tool for linear dimensionality reduction and clustering. Still, it is highly sensitive to outliers and does not scale well with the number of data samples. Robust PCA addresses the first issue with a sparse penalty term. The second issue can be handled with the matrix factorization model, which is, however, non-convex. Moreover, PCA-based clustering can be enhanced by using a graph of data similarity. In this article, we introduce a new model …
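The abstract refers to Robust PCA's low-rank-plus-sparse decomposition. As a hedged illustration (of the standard convex program min ||L||_* + λ||S||_1 s.t. L + S = M, not the paper's graph-regularized variant), here is a minimal numpy sketch using the inexact augmented Lagrangian method; the function name and the defaults λ = 1/√max(m, n) and ρ = 1.5 are conventional choices, not taken from this paper.

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM for  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # conventional default
    norm_fro = np.linalg.norm(M, "fro")
    mu = 1.25 / np.linalg.norm(M, 2)        # initial penalty parameter
    mu_bar, rho = mu * 1e7, 1.5             # cap and growth factor
    Y = np.zeros_like(M)                    # dual variable
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-update: singular value thresholding of M - S + Y/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: elementwise soft thresholding of M - L + Y/mu
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S                       # constraint residual
        Y = Y + mu * Z
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(Z, "fro") / norm_fro < tol:
            break
    return L, S
```

On a small synthetic problem (low-rank matrix plus sparse corruption) this typically recovers both components to high accuracy, which is the behavior the cited recovery guarantees describe.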

Cited by 91 publications (87 citation statements); references 23 publications.
“…• Manifold RPCA (MRPCA) [62]. Rather than learning the graph from clean data, MRPCA constructs the graph from raw data according to some metric.…”
Section: Comparison Methods
“…Wright et al. [40] and Candès et al. [6] proved that, under some mild conditions, the convex relaxation formulation (3) can exactly recover the low-rank and sparse matrices (L*, S*) with high probability. The formulation (3) has been widely used in many computer vision applications, such as object detection and background subtraction [17], image alignment [50], low-rank texture analysis [29], image and video restoration [51], and subspace clustering [27]. This is mainly because the optimal solutions of the sub-problems involving both terms in (3) can be obtained by two well-known proximal operators: the singular value thresholding (SVT) operator [23] and the soft-thresholding operator [52].…”
Section: Convex Nuclear Norm Minimization
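The two proximal operators named in the quotation have simple closed forms; a minimal numpy sketch (the function names `soft_threshold` and `svt` are ours):

```python
import numpy as np

def soft_threshold(X, tau):
    # prox of tau*||.||_1: shrink each entry toward zero by tau
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # prox of tau*||.||_*: soft-threshold the singular values (SVT)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

SVT is literally the entrywise operator applied to the singular values, which is why both sub-problems in the convex formulation admit cheap closed-form updates.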
“…Generally, the ℓp-norm (0 < p < 1) leads to a non-convex, non-smooth, and non-Lipschitz optimization problem [39]. Fortunately, we can efficiently solve (27) by introducing the following half-thresholding operator [21]. Proposition 1.…”
Section: Updating S^{k+1}
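The half-thresholding operator referenced above handles the ℓ_{1/2} case (p = 1/2) in closed form, due to Xu et al. The sketch below is an assumption-laden reconstruction of the commonly cited version — the threshold (54^{1/3}/4)·λ^{2/3} and the arccos expression should be checked against [21] before use:

```python
import numpy as np

def half_threshold(y, lam):
    """Half-thresholding operator for the l_{1/2} penalty (Xu et al. form;
    reconstructed from the commonly cited expressions, not from [21] directly)."""
    y = np.asarray(y, dtype=float)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # threshold value
    out = np.zeros_like(y)
    mask = np.abs(y) > t                                  # entries that survive
    ym = y[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(ym) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * ym * (1.0 + np.cos(2.0 * np.pi / 3.0
                                                 - (2.0 / 3.0) * phi))
    return out
```

Unlike soft thresholding, this operator has a jump at the threshold (small entries go exactly to zero, surviving entries are barely shrunk), which is what makes the ℓ_{1/2} penalty sparser than ℓ_1.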
“…Finally, we convert these images to 784-dimensional vectors. To construct the graph constraint for our proposed model and GRPCA, we follow the same procedure as in [30]: the input graph is calculated from the 5 nearest neighbors. Note that we use the Gaussian kernel function with γ = 0.007 on digit "2" and the polynomial kernel function with d = 2 on digit "3".…”
Section: Data Recovery With Graph Constraint
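A 5-nearest-neighbor graph with Gaussian-kernel edge weights, as described in the quotation, can be built in a few lines. This sketch uses our own helper name `knn_gaussian_graph` and the quoted γ = 0.007, and symmetrizes with an elementwise maximum — one common convention; the cited papers may differ:

```python
import numpy as np

def knn_gaussian_graph(X, k=5, gamma=0.007):
    """Symmetric k-NN graph with weights exp(-gamma * ||x_i - x_j||^2)."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared distances
    d2 = np.maximum(d2, 0.0)                        # guard against round-off
    np.fill_diagonal(d2, np.inf)                    # exclude self-loops
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]                # k nearest neighbors
        W[i, nbrs] = np.exp(-gamma * d2[i, nbrs])
    return np.maximum(W, W.T)                       # symmetrize
```

The resulting weight matrix W (or its graph Laplacian) is the kind of similarity graph used as the manifold/graph constraint in GRPCA-style models.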