A new algorithm for initial cluster centers in k-means algorithm
2011 · DOI: 10.1016/j.patrec.2011.07.011

Cited by 168 publications (85 citation statements). References 5 publications.
“…The maximum and average FM indices produced by our method are higher than those of the other three algorithms when the number of clusters reaches 10 and then 20. It is also noticed that the increase in speed of our method is lower than that of Murat Erisoglu's method [4] for this group of data sets. Murat Erisoglu's method has the best result when the number of clusters is 5.…”
Section: B. Experiments on Several Data Sets
confidence: 68%
“…The traditional k-means method chooses initial cluster centers arbitrarily, which may affect its accuracy in clustering [3], [4]. …”
Section: Introduction
confidence: 99%
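The sensitivity to arbitrary starting centers noted in this citation is easy to reproduce. The sketch below is a generic illustration with plain Lloyd iterations on synthetic blobs, not the cited paper's method or data: crowding all initial centers inside one blob typically leaves k-means in a worse local optimum than spreading one seed per blob. The blob layout and radii are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs along the x-axis (illustrative data).
X = np.vstack([rng.normal(loc=(cx, 0.0), scale=0.5, size=(50, 2))
               for cx in (0.0, 20.0, 40.0)])

def lloyd_sse(X, centers, iters=20):
    """Run plain Lloyd iterations from the given centers; return final within-cluster SSE."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts):                 # keep the old center if a cluster empties
                centers[k] = pts.mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

# "Arbitrary" start: all three centers crowded inside the first blob.
sse_bad = lloyd_sse(X, np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]]))
# Spread start: one seed near each blob.
sse_good = lloyd_sse(X, np.array([[0.0, 0.0], [20.0, 0.0], [40.0, 0.0]]))
# On this data the crowded start leaves two distant blobs sharing one center,
# so sse_bad is far larger than sse_good.
```

This is exactly the failure mode that deterministic initialization schemes such as the one proposed in the indexed paper aim to avoid.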
“…Unfortunately, using the standard K-Means algorithm, certain objects that represent the same rank may not be grouped together, because each object is measured by its distance to the closest centroid rather than to the other objects. As the clustering result is influenced by the initial centroid selection [11], the initial centroids also need to be targeted towards relevant points of the ranking-based cluster representation. Many suggestions have been made for enhancing the initial centroids.…”
Section: Related Work
confidence: 99%
“…Each clustering algorithm has been applied with 9 different parameterizations. We consider 3 kernels (cosine distance, Euclidean distance and Mahalanobis distance) and 3 cluster center initialization methods (random, cumulative approach [26] and subtractive clustering [27]).…”
Section: Construction of H-Fuzzy Partitions
confidence: 99%
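One of the initialization methods this citation compares against is subtractive clustering. A minimal sketch in the style of Chiu's potential-based formulation is below; it is not the exact parameterization used in the cited work (its reference 27), and the radii `r_a`, `r_b = 1.5 * r_a` and the fixed-`k` stopping rule are illustrative assumptions. Each point gets a density potential from its neighbors; the highest-potential point becomes a center, and potential near it is subtracted before picking the next.

```python
import numpy as np

def subtractive_init(X, k, r_a=2.0):
    """Pick k initial centers by subtractive clustering (Chiu-style potentials)."""
    alpha = 4.0 / r_a ** 2
    beta = 4.0 / (1.5 * r_a) ** 2     # squash radius r_b = 1.5 * r_a (common choice)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    P = np.exp(-alpha * sq).sum(axis=1)                       # potential of each point
    centers = []
    for _ in range(k):
        c = int(P.argmax())                                   # densest remaining point
        centers.append(X[c])
        P = P - P[c] * np.exp(-beta * sq[:, c])               # suppress its neighborhood
    return np.array(centers)

# Demo on three well-separated blobs: one center should land in each blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=(cx, 0.0), scale=0.5, size=(50, 2))
               for cx in (0.0, 20.0, 40.0)])
centers = subtractive_init(X, 3)
```

Because chosen centers are always actual data points in dense regions, this initializer is deterministic for a given data set, which is the property such methods trade against the cost of the pairwise-distance computation.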