2012 11th International Conference on Machine Learning and Applications
DOI: 10.1109/icmla.2012.69
Compressive Clustering of High-Dimensional Data

Abstract: In this paper we focus on realistic clustering problems where the input data is high-dimensional and the clusters have complex, multimodal distributions. In this challenging setting, conventional methods such as the k-centers family, hierarchical clustering, or model-fitting approaches are inefficient and typically converge far from the globally optimal solution. As an alternative, we propose a novel unsupervised learning approach based on the compressive sensing paradigm. The key idea underlying o…

Cited by 7 publications (8 citation statements)
References 19 publications (21 reference statements)
“…Keeping the first parameter's value fixed, we minimize the second parameter, and then vice versa. Thus the new value of c is found as [23]:

c = Σ_i Σ_j ‖X_j − v_j(X_i)‖²  (16)

After applying (16) to all datasets, we can find the minimum value in (14). The pseudo-code of the proposed algorithm follows.…”
Section: A. The Proposed Clustering Algorithm
confidence: 99%
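The alternating scheme quoted above (fix one parameter, minimize over the other, then swap) has the same two-step structure as k-means. A minimal sketch of that alternation, assuming a standard assignment/update loop; the function name and toy data are illustrative, not taken from the cited paper:

```python
import numpy as np

def alternating_minimization(X, k, n_iter=50, seed=0):
    """Two-step alternation: with centers fixed, assign points;
    with assignments fixed, update centers (k-means style)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 1: centers fixed -> assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 2: assignments fixed -> move each center to its cluster mean.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # Cost: sum of squared distances of points to their assigned centers,
    # the quantity being minimized by the alternation.
    cost = float(((X - centers[labels]) ** 2).sum())
    return labels, centers, cost
```

Each step can only decrease the cost, which is why the alternation converges, although (as the surveyed paper notes for the k-centers family) possibly to a local rather than global optimum.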
“…In the K-SVD algorithm, we solve (8) iteratively in two stages, parallel to those in K-Means [22]. The dictionary training can be initialized by randomly selecting K (N-dimensional) vectors of the large ECG dataset Y as the initial atoms [23,24]. The stopping condition can be defined as a number of iterations, an error threshold, or a combination of both.…”
Section: K-SVD Algorithm
confidence: 99%
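The two-stage K-SVD iteration described in this excerpt (sparse coding with the dictionary fixed, then per-atom updates with the codes fixed, initialized from randomly chosen data columns) can be sketched as follows. This is a simplified illustration under my own assumptions: the coding stage uses plain matching pursuit rather than whichever pursuit method the cited work uses, and all names are hypothetical.

```python
import numpy as np

def ksvd_sketch(Y, K, sparsity, n_iter=10, seed=0):
    """Minimal K-SVD sketch: alternate sparse coding with per-atom
    rank-1 SVD dictionary updates. Y has signals as columns."""
    rng = np.random.default_rng(seed)
    N, M = Y.shape
    # Initialize the dictionary with K randomly chosen, normalized data columns.
    D = Y[:, rng.choice(M, size=K, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    X = np.zeros((K, M))  # sparse coefficient matrix
    for _ in range(n_iter):
        # Stage 1: sparse coding of each signal (simple matching pursuit).
        X[:] = 0.0
        for m in range(M):
            r = Y[:, m].copy()
            for _ in range(sparsity):
                k = int(np.argmax(np.abs(D.T @ r)))
                coef = D[:, k] @ r
                X[k, m] += coef
                r -= coef * D[:, k]
        # Stage 2: update each atom (and its coefficients) by a rank-1 SVD
        # of the error restricted to the signals that use that atom.
        for k in range(K):
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = S[0] * Vt[0]
    return D, X
```

The SVD update keeps each atom at unit norm while jointly refitting the coefficients of the signals that use it, which is what distinguishes K-SVD's stage 2 from the simple centroid update of K-Means.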
“…The reconstruction of the feature vector is done by solving an ℓ1-minimization least-squares problem as [19]:…”
Section: Advanced K-Means Algorithm
confidence: 99%
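An ℓ1-minimization least-squares problem of the kind mentioned here, min_x ½‖Ax − y‖² + λ‖x‖₁, can be solved with iterative soft thresholding (ISTA). A minimal sketch of that generic solver — not the specific formulation of [19]:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft thresholding: a gradient step followed by shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the least-squares term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The soft-threshold step is what produces a sparse solution: coefficients whose gradient-step magnitude falls below λ/L are zeroed out, so the recovered feature vector uses only the dictionary atoms that matter.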