2019
DOI: 10.1007/978-981-13-3600-3_3

Initial Centroids for K-Means Using Nearest Neighbors and Feature Means

Cited by 10 publications (6 citation statements)
References 8 publications
“…But execution time was trivial for the dataset the authors experimented with. M. A. Lakshmi et al. [13] proposed a method to find initial centroids using the nearest-neighbor method. They compared their idea using SSE (sum of the squared differences) against random and k-means++ initial selection.…”
Section: Related Work (mentioning, confidence: 99%)
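The SSE metric used in the comparison above can be sketched as follows. This is the generic within-cluster sum of squared distances; the function and variable names are illustrative, not taken from the cited paper:

```python
import numpy as np

def sse(X, centroids, labels):
    """Sum of squared distances from each point to its assigned centroid.

    A lower SSE for the same k suggests a better set of initial centroids
    after k-means has converged.
    """
    X = np.asarray(X, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    diffs = X - centroids[labels]   # per-point offset from assigned centroid
    return float((diffs ** 2).sum())
```

Running k-means from different initializations (random, k-means++, nearest-neighbor-based) and comparing the resulting SSE values is the evaluation protocol the citation statement describes.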
“…The centroid method maximized the joint probability of pixel position and detected threshold distance for cluster data arrangements. Lakshmi et al. [61] developed a KM initialization algorithm using a feature-mean and nearest-neighbors (FMNN) approach. The FMNN algorithm first computes the mean of each dimension and then discards the N/K nearest-neighbor data points for initial centroid detection.…”
Section: Related Work (mentioning, confidence: 99%)
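The FMNN procedure described above can be sketched roughly as: compute the per-dimension means, take the data point nearest that mean vector as a centroid, discard its N/K nearest neighbors, and repeat on the remaining points. This is a minimal reading of the citation statement, not the paper's exact algorithm; all names here are illustrative:

```python
import numpy as np

def fmnn_init(X, k):
    """Sketch of an FMNN-style k-means initializer (assumed reading):
    pick the point closest to the feature means of the remaining data,
    then discard that point's n/k nearest neighbors before the next pick."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    remaining = np.arange(n)            # indices still eligible as centroids
    centroids = []
    for _ in range(k):
        pts = X[remaining]
        mean = pts.mean(axis=0)         # mean of each dimension (feature means)
        c_idx = int(np.argmin(np.linalg.norm(pts - mean, axis=1)))
        centroids.append(pts[c_idx])
        # discard the centroid's n/k nearest neighbors (including itself)
        d_c = np.linalg.norm(pts - pts[c_idx], axis=1)
        drop = np.argsort(d_c)[: max(1, n // k)]
        remaining = np.delete(remaining, drop)
        if remaining.size == 0:
            break
    return np.vstack(centroids)
```

Discarding the N/K nearest neighbors after each pick keeps the chosen centroids from landing in the same dense region, which is the failure mode of purely random initialization.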
“…This study compared the proposed algorithm to the standard random KM [22], KM++ [40,41], ADV [66,98], MKM [49], Mean-KM [51], NFD [54], K-MAM [56], NRKM2 [58], FMNN [61], and MuKM [63] algorithms. These algorithms achieve better efficiency, effectiveness, stability, and convergence, with nondeterministic and non-density-based behavior.…”
Section: Experimental Analysis (mentioning, confidence: 99%)
“…Much of the literature has addressed the initialization problem of the k-means clustering algorithm [11,14,25,27,33,42]. For example, Duan et al. developed a density-based method to select the initial centroids [14].…”
Section: Introduction (mentioning, confidence: 99%)
“…For example, Duan et al. developed a density-based method to select the initial centroids [14]. Lakshmi et al. proposed using nearest neighbors and feature means to decide the initial centroids [25]. Meanwhile, many studies have addressed the similarity problem of the k-means clustering algorithm [4,34,37,39,40,54].…”
Section: Introduction (mentioning, confidence: 99%)