2008
DOI: 10.1007/978-3-540-88309-8_14
K-Means Initialization Methods for Improving Clustering by Simulated Annealing

Cited by 14 publications (8 citation statements) · References 11 publications
“…These include methods based on hierarchical clustering [72], genetic algorithms [73], simulated annealing [74,75], multiscale data condensation [76], and kd-trees [77]. Other interesting methods include the global k-means method [78], Kaufman and Rousseeuw's method [79], and the ROBIN method [80].…”
Section: Initializing the K-means Algorithm
Confidence: 99%
“…Pipeline hybridization means that algorithm “A” runs fully and its results are taken to algorithm “B” as inputs. This type of hybridization has been investigated in several papers concerning Kmeans and SA algorithms [18, 44, 45]. The clustering algorithm (Kmeans) will take all the points (messages) and cluster them based on their high similarities with the corresponding classes.…”
Section: Results
Confidence: 99%
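The pipeline hybridization described in the excerpt (k-means runs to completion, then its output seeds simulated annealing) can be sketched as below. This is an illustrative 1-D toy, not the cited papers' code; all names (`sse`, `kmeans`, `sa_refine`) and the cooling schedule are assumptions.

```python
import math
import random

def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid (1-D)."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def kmeans(points, k, iters=20, rng=random):
    """Plain Lloyd-style k-means on 1-D data; returns the final centroids."""
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # empty clusters keep their previous centroid
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def sa_refine(points, centroids, temp=1.0, cooling=0.95, steps=200, rng=random):
    """Stage two of the pipeline: SA perturbs the k-means centroids."""
    cur, cur_cost = list(centroids), sse(points, centroids)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        cand = list(cur)
        cand[rng.randrange(len(cand))] += rng.gauss(0, temp)
        cost = sse(points, cand)
        # accept improvements always; accept worse moves with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / max(temp, 1e-9)):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        temp *= cooling
    return best
```

A typical call runs the two stages back to back, e.g. `sa_refine(data, kmeans(data, 2))`; because SA tracks the best solution seen, the refined centroids can never have a higher SSE than the k-means output they started from.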
“…Kmeans can still get stuck in local optima. In the literature, several papers addressed how SA can be used for optimal selection of initial Kmeans points [37, 38]. The outcome of these results is that metaheuristics can solve the initialization problem of Kmeans, but the problem of converging into local optima is still not solved.…”
Section: Methods
Confidence: 99%
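The reverse hybridization this excerpt describes, SA choosing which data points serve as the initial k-means centers, could look roughly like the sketch below. It is a minimal 1-D illustration under assumed names and parameters (`sa_select_seeds`, the geometric cooling), not the method of [37, 38].

```python
import math
import random

def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid (1-D)."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def sa_select_seeds(points, k, temp=1.0, cooling=0.95, steps=300, rng=random):
    """SA search over which k data points to use as k-means seeds.

    The state is a list of k indices into `points`; a move swaps one index
    for a random other; the objective is the SSE of the seeds as centroids.
    """
    cur = rng.sample(range(len(points)), k)
    cur_cost = sse(points, [points[i] for i in cur])
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        cand = list(cur)
        cand[rng.randrange(k)] = rng.randrange(len(points))
        if len(set(cand)) == k:  # keep the k seeds distinct
            cost = sse(points, [points[i] for i in cand])
            if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / max(temp, 1e-9)):
                cur, cur_cost = cand, cost
                if cost < best_cost:
                    best, best_cost = list(cand), cost
        temp *= cooling
    return [points[i] for i in best]
```

An ordinary k-means run would then start from the returned seeds; as the excerpt notes, this improves initialization but k-means itself can still descend into a local optimum from there.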
“…In K-means clustering, the clusters are formed by minimizing the within-cluster variance of the observations in each cluster [48]. Given a data set represented by [50]. If it is not chosen properly, then the algorithm may converge towards a local optimal solution for the objective function.…”
Section: A Formation Of Retail Zones Using K-means Clustering
Confidence: 99%
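The within-cluster variance objective mentioned in this last excerpt is just the total squared deviation of each observation from its cluster mean; different initializations can leave k-means in partitions with very different values of it. A minimal sketch (the function name is an assumption, not from [48]):

```python
def within_cluster_variance(clusters):
    """Total within-cluster sum of squared deviations from each cluster mean."""
    total = 0.0
    for c in clusters:
        mu = sum(c) / len(c)
        total += sum((x - mu) ** 2 for x in c)
    return total

# Two partitions of the same four points: the natural one scores far lower.
good = within_cluster_variance([[1, 2], [10, 11]])   # → 1.0
bad = within_cluster_variance([[1, 10], [2, 11]])    # → 81.0
```

The gap between the two partitions (1.0 vs 81.0) is exactly the kind of difference a poorly chosen initialization can lock in, which is why initialization methods such as those surveyed above matter.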