2007
DOI: 10.1007/s10618-006-0060-8

Locally adaptive metrics for clustering high dimensional data

Abstract: Clustering suffers from the curse of dimensionality, and similarity functions that use all input features with equal relevance may not be effective. We introduce an algorithm that discovers clusters in subspaces spanned by different combinations of dimensions via local weightings of features. This approach avoids the risk of loss of information encountered in global dimensionality reduction techniques, and does not assume any data distribution model. Our method associates to each cluster a weight vector, whose…
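The abstract describes the core idea: each cluster carries its own feature-weight vector, so distances are computed with a locally adapted metric rather than one global metric. A minimal NumPy sketch of this per-cluster weighting scheme follows; the names are hypothetical, `h` stands in for the regularization weight mentioned by the citing papers, and the exponential weight update is an assumed, common choice rather than the paper's exact rule:

```python
import numpy as np

def lac_sketch(X, k, h=0.2, n_iter=20, init=None, seed=0):
    """Illustrative LAC-style clustering sketch: each cluster keeps its own
    feature-weight vector, so the distance metric adapts locally.
    h mimics the regularization weight; the exponential weight update is
    an assumed, common choice, not necessarily the paper's exact rule."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centroids = (X[rng.choice(n, size=k, replace=False)] if init is None
                 else np.asarray(init, dtype=float)).copy()
    weights = np.full((k, d), 1.0 / d)          # start from uniform weights

    for _ in range(n_iter):
        # assign each point using its cluster-specific weighted distance
        dists = np.stack([np.sum(weights[j] * (X - centroids[j]) ** 2, axis=1)
                          for j in range(k)])   # shape (k, n)
        labels = dists.argmin(axis=0)

        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            centroids[j] = members.mean(axis=0)
            # per-dimension average squared deviation inside the cluster
            spread = ((members - centroids[j]) ** 2).mean(axis=0)
            w = np.exp(-spread / h)             # tight dimensions get large weight
            weights[j] = w / w.sum()            # normalize to a weight vector
    return labels, centroids, weights
```

Dimensions along which a cluster is tight receive large weights, so each cluster effectively lives in its own soft subspace, which is the behavior the abstract attributes to the method.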

Cited by 215 publications (175 citation statements)
References 25 publications (31 reference statements)
“…Besides mean shift and DBSCAN [7,10], we also performed comparisons against several other clustering algorithms: OPTICS [1,6], k-means [21], LAC [9], Aver-l (average-linkage clustering) [15], and EM (with a Gaussian mixture) [8]. OPTICS is a density-based clustering algorithm which creates an augmented ordering of the data representing its clustering structure, and then retrieves DBSCAN clusters as the final clustering result.…”
Section: Methods (citation type: mentioning)
confidence: 99%
“…k-means, LAC, Aver-l, and EM require the number of clusters as input, which we set equal to the number of classes in the data. LAC requires an additional parameter (the weight of the regularization term; see [9] for details), which we set to 0.2 throughout our experiments. Mean shift, DBSCAN, OPTICS, and Aver-l are deterministic for fixed parameter values.…”
Section: Methods (citation type: mentioning)
confidence: 99%
“…This TWvFKM weights both views and individual variables and is an extension to W-k-means. Domeniconi et al. [15] have proposed the Locally Adaptive Clustering (LAC) algorithm, which assigns a weight to each variable in each cluster.…”
Section: Fuzzy k-means clustering algorithms (citation type: mentioning)
confidence: 99%
“…The computation stops when a quality criterion is satisfied or when a maximum number of iterations is reached. Examples of k-means-like methods are: K-Harmonic Means [21], CURLER [20], LAC [6], and LWC/CLWC [4].…”
Section: Clustering (citation type: mentioning)
confidence: 99%
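The stopping rule quoted above (a quality criterion or an iteration cap) is the standard skeleton of k-means-like methods. A generic sketch, with illustrative names and an assumed within-cluster SSE objective as the quality criterion:

```python
import numpy as np

def kmeans_like(X, k, tol=1e-6, max_iter=100, init=None, seed=0):
    """Generic k-means-style loop: stop when the objective (within-cluster
    SSE, standing in for the 'quality criterion') improves by less than
    tol, or after max_iter iterations. All names here are illustrative."""
    rng = np.random.default_rng(seed)
    centroids = (X[rng.choice(len(X), size=k, replace=False)] if init is None
                 else np.asarray(init, dtype=float)).copy()
    prev_obj = np.inf
    for it in range(max_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        obj = d2[np.arange(len(X)), labels].sum()   # within-cluster SSE
        if prev_obj - obj < tol:                    # quality criterion met
            break
        prev_obj = obj
        for j in range(k):
            if np.any(labels == j):                 # guard empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids, it
```

The methods listed in the quote (K-Harmonic Means, CURLER, LAC, LWC/CLWC) differ in their objective and update steps but share this assign-update-check structure.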