2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.426

Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering

Cited by 211 publications (180 citation statements). References 37 publications.

“…It was reported in [48] that for this data set, even using just two iterations of the solvers, OMP took 783 minutes, and two SSC-ℓ1 methods (one based on the original SSC ADMM algorithm, without using Remark 1) either did not finish the two iterations within 7 days or used more than 16 GB of memory. The results of running our models on this data (after normalizing the features to z-scores) and taking 2 steps as in [48] are presented in Table 2. We do not include results for SSC-ℓ1 with affine constraints, as with default parameters this model does not lead to a sparse C, and hence there are memory issues.…”
Section: SSC-ℓ1 and SSC-ℓ0 on the Covertype Data Set
Mentioning confidence: 96%
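
The excerpt above describes a pipeline of z-score normalization followed by a greedy sparse solver capped at two iterations. A minimal Python sketch of that step, using sklearn's OrthogonalMatchingPursuit on synthetic stand-in data (the data shape, the cap of 2 nonzero coefficients, and all parameters here are illustrative assumptions, not the settings from [48]):

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # synthetic stand-in for Covertype rows

# z-score normalization of the features, as in the excerpt
X = (X - X.mean(axis=0)) / X.std(axis=0)

N = X.shape[0]
C = np.zeros((N, N))                 # sparse self-representation matrix
for j in range(N):
    # express point j in terms of the other points; capping OMP at 2
    # nonzero coefficients mirrors "two iterations" of the greedy solver
    others = np.delete(np.arange(N), j)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
    omp.fit(X[others].T, X[j])
    C[others, j] = omp.coef_

This column-by-column solve over all N points is what dominates the running times quoted above once N grows to Covertype scale.
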
“…In this section, the performance of SR-SSC is evaluated on three large-scale and challenging data sets: handwritten digits (MNIST), object images (CIFAR10), and the forest cover data set (Covertype). We compare the accuracy and running times of SR-SSC with standard SSC based on ADMM (this is SR-SSC with L = 1 and k = N) and three state-of-the-art methods for sparse subspace clustering, namely OMP [40], SSSC [26] (which is closely related to SR-SSC with L = 1), and ORGEN [39]; see Section 2 for a brief description of these methods. For OMP and ORGEN, we used the code available from http://vision.jhu.edu/code/.…”
Section: Real-World Data Sets
Mentioning confidence: 99%
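
All of the methods compared in this excerpt (SSC via ADMM, OMP, SSSC, ORGEN, SR-SSC) share the same back end: the coefficient matrix is symmetrized into an affinity and passed to spectral clustering. A minimal sketch of that shared step, assuming a coefficient matrix C such as the one built in the previous snippet and a known cluster count (the helper name is illustrative):

import numpy as np
from sklearn.cluster import SpectralClustering

def labels_from_coefficients(C, n_clusters):
    # symmetric, nonnegative affinity from the self-representation
    W = np.abs(C) + np.abs(C).T
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(W)
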
“…regularization they choose for sparse coefficients. For instance, Low Rank Representation (LRR) [7,8], based on the nuclear norm, represents the data points with the lowest-rank representation among all candidates; Least Square Regression (LSR) [9], with the ℓ2 norm, groups highly correlated data; Exemplar-based Subspace Clustering (ESC) [10], based on the ℓ1 norm, focuses on class-imbalanced data; Elastic Net Subspace Clustering (ENSC) [11] adopts both the ℓ1 and ℓ2 norms to find better coefficients; Subspace Learning by ℓ0-Induced Sparsity [12] employs proximal gradient descent to obtain a sub-optimal solution; while Sparse Subspace Clustering (SSC) [13], with the ℓ1 norm, computes a sparse self-representation of the data points. Structured Sparse Clustering [14] learns both the affinity and the segmentation of SSC.…”
Section: Introduction
Mentioning confidence: 99%
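
The regularizers surveyed in this excerpt all instantiate the same self-expressive model, minimizing a reconstruction error ||x_j - X c_j|| plus a penalty on c_j (with c_jj = 0); only the penalty changes. A minimal sketch of the elastic-net variant (ENSC), using sklearn's ElasticNet as a per-point solver; alpha and l1_ratio are illustrative, and sklearn's parameterization differs from the lambda weights used in the cited papers:

import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_coefficients(X, alpha=0.05, l1_ratio=0.9):
    """X: (N, d) data matrix; returns an (N, N) coefficient matrix C."""
    N = X.shape[0]
    C = np.zeros((N, N))
    for j in range(N):
        others = np.delete(np.arange(N), j)   # exclude x_j itself (c_jj = 0)
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False)
        model.fit(X[others].T, X[j])
        C[others, j] = model.coef_
    return C

In this parameterization, l1_ratio = 1.0 corresponds to an SSC-style pure ℓ1 penalty and l1_ratio = 0.0 to an LSR-style pure ℓ2 penalty, which is exactly the trade-off ENSC [11] interpolates.
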