Proceedings of the Eighteenth Annual Symposium on Computational Geometry 2002
DOI: 10.1145/513400.513402

A local search approximation algorithm for k-means clustering

Abstract: In k-means clustering we are given a set of n data points in d-dimensional space ℝ^d and an integer k, and the problem is to determine a set of k points in ℝ^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the very high constant factors involved. There are many heuristics that are used in practice…
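
The title refers to a swap-based local search for this objective. As a minimal, illustrative sketch (not the paper's exact procedure, which couples the swap test with an efficient kd-tree-based implementation), the idea is to repeatedly replace one center with one candidate point whenever doing so lowers the k-means cost; function names, the candidate set, and the termination rule below are assumptions:

```python
import numpy as np

def kmeans_cost(points, centers):
    """Mean squared distance from each point to its nearest center."""
    # pairwise squared distances, shape (n, k)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def single_swap_local_search(points, centers, candidates, max_iters=100):
    """Naive single-swap local search: accept any (center, candidate)
    swap that strictly lowers the cost; stop at a local minimum."""
    centers = centers.copy()
    best = kmeans_cost(points, centers)
    for _ in range(max_iters):
        improved = False
        for i in range(len(centers)):
            for c in candidates:
                trial = centers.copy()
                trial[i] = c
                cost = kmeans_cost(points, trial)
                if cost < best:
                    centers, best, improved = trial, cost, True
        if not improved:
            break
    return centers, best
```

This brute-force version evaluates every swap from scratch, so each pass costs O(n·k·|candidates|) distance computations; it is meant only to make the local-search idea concrete.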

Cited by 211 publications (87 citation statements)
References 21 publications
Citing statements published between 2009 and 2021
“…where μ = 10^5, λ = 0.20 max(|a|, |b|), L_min = 0.15 max(L) and L_max = 0.95 max(L). For this initial segmentation, we use the k-means implementation from Kanungo et al. [KMN*04]. Gehler and colleagues [GRK*11] also used k-means for their global sparse reflectance prior, which, along with their shading prior and their gradient consistency term, fits into their global optimization system.…”
Section: Algorithm
confidence: 99%
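
As a worked illustration of the parameter settings quoted above, here is how they could be computed, assuming L, a and b are per-pixel image channels (the excerpt does not define them, and the function name is hypothetical):

```python
import numpy as np

def segmentation_params(L, a, b):
    """Compute the thresholds quoted in the excerpt above.
    Assumption: L, a, b are image channels; names are illustrative."""
    mu = 1e5                                            # mu = 10^5
    lam = 0.20 * max(np.abs(a).max(), np.abs(b).max())  # lambda = 0.20 max(|a|, |b|)
    L_min = 0.15 * L.max()                              # L_min = 0.15 max(L)
    L_max = 0.95 * L.max()                              # L_max = 0.95 max(L)
    return mu, lam, L_min, L_max
```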
“…Ideally, if the clustering algorithm converges to a local minimum closer to the global minimum, then each cluster is relatively denser and we may reduce the number of distance calculations in the searching stage. Some k-means clustering algorithms [2][15] have been shown to produce better results than Lloyd’s algorithm, but for this paper we already achieve excellent performance using the naive Lloyd’s algorithm. It will be interesting to see how performance can be further improved by integrating these algorithms.…”
Section: Methods
confidence: 83%
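
For reference, the naive Lloyd’s algorithm the excerpt mentions alternates an assignment step and a centroid-update step until the centers stop moving. A minimal sketch (the stopping tolerance and empty-cluster handling are illustrative choices):

```python
import numpy as np

def lloyd(points, centers, tol=1e-6, max_iters=100):
    """Naive Lloyd iteration: assign points to nearest centers,
    then move each center to its cluster's centroid."""
    centers = centers.copy()
    for _ in range(max_iters):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: centroid of each cluster; keep empty clusters fixed
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(len(centers))
        ])
        shift = np.linalg.norm(new_centers - centers)
        centers = new_centers
        if shift < tol:
            break
    return centers, labels
```

Lloyd’s algorithm only guarantees convergence to a local minimum of the k-means cost, which is exactly why the swap-based local search of the cited paper, with its provable approximation bound, is of interest as a drop-in improvement.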
“…To prove the correctness of our new and novel k-means algorithm, we will need the following definitions from Kumar et al [23] and Kanungo et al [24].…”
Section: Methods
confidence: 99%