Proceedings of the 2018 10th International Conference on Machine Learning and Computing
DOI: 10.1145/3195106.3195118
An Optimized Chameleon Algorithm based on Local Features

Cited by 7 publications
(2 citation statements)
References 8 publications
“…Agglomerative hierarchical approaches begin by treating each individual data point as a cluster, then repeatedly merge the most similar clusters until only one remains. Two algorithms, BIRCH [54] and CHAMELEON [16], take advantage of this concept. A divisive hierarchical clustering approach instead treats the entire data set as a single large cluster, then splits the least cohesive cluster at each stage until a user-defined number of clusters is reached.…”
Section: Single Machine Clustering Techniques
confidence: 99%
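The merge loop described in the excerpt above can be sketched as follows. This is an illustration of the general agglomerative idea only, not the cited algorithm: single linkage is assumed as the similarity measure, and the data, function names, and stopping criterion (`target_clusters`) are invented for the example.

```python
# Illustrative single-linkage agglomerative clustering (not the cited
# paper's method): start with one cluster per point, repeatedly merge
# the closest pair of clusters until the target count is reached.
from itertools import combinations


def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def agglomerate(points, target_clusters=1):
    # Every data point starts as its own singleton cluster.
    clusters = [[p] for p in points]
    while len(clusters) > target_clusters:
        # Single-linkage distance: minimum pairwise distance between
        # any point of one cluster and any point of the other.
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: min(
                euclidean(a, b)
                for a in clusters[ij[0]]
                for b in clusters[ij[1]]
            ),
        )
        # Fuse the most similar pair and drop the absorbed cluster.
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters


data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
clusters = agglomerate(data, target_clusters=2)
print(clusters)  # two clusters: the two nearby pairs
```

Running the loop to `target_clusters=1` reproduces the excerpt's description of merging until a single cluster remains; stopping earlier yields a flat clustering at that level of the hierarchy.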
“…The SPMD (Single Program Multiple Data) technique is used to achieve parallelism in this program, and a message-passing mechanism handles communication between processors. A parallel variant of the CHAMELEON method, based on a work-pool model, was described in [16]. The approach is divided into three stages; the first uses a parallel K-Nearest Neighbor computation to reduce the algorithm's time complexity.…”
Section: Parallel Clustering
confidence: 99%
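The first stage the excerpt attributes to the parallel variant, computing each point's k nearest neighbours concurrently, can be sketched in the SPMD spirit: the same `knn` routine runs over disjoint slices of the data. This is an assumption-laden illustration, not the paper's implementation; a real SPMD version would use separate processes with explicit message passing (e.g. MPI) rather than a thread pool.

```python
# Illustrative parallel k-NN stage (not the cited implementation):
# every worker runs the same function on different points, SPMD-style.
from concurrent.futures import ThreadPoolExecutor

POINTS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
K = 2  # number of neighbours per point (illustrative choice)


def knn(i):
    """Return the indices of the K nearest neighbours of point i."""
    px, py = POINTS[i]
    dists = sorted(
        ((qx - px) ** 2 + (qy - py) ** 2, j)
        for j, (qx, qy) in enumerate(POINTS)
        if j != i
    )
    return [j for _, j in dists[:K]]


# Same program, multiple data: map knn over all point indices in parallel.
with ThreadPoolExecutor(max_workers=2) as ex:
    neighbours = list(ex.map(knn, range(len(POINTS))))
print(neighbours)
```

The resulting k-NN graph is what sparse graph-based methods such as CHAMELEON partition in their later stages, which is why precomputing it in parallel reduces the overall running time.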