2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs.2016.46

Local Search Yields Approximation Schemes for k-Means and k-Median in Euclidean and Minor-Free Metrics

Abstract: We give the first polynomial-time approximation schemes (PTASs) for the following problems: (1) uniform facility location in edge-weighted planar graphs; (2) k-median and k-means in edge-weighted planar graphs; (3) k-means in Euclidean space of bounded dimension. Our first and second results extend to minor-closed families of graphs. All our results extend to cost functions that are the p-th power of the shortest-path distance. The algorithm is local search where the local neighborhood of a solution S consists…
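For intuition, the following is a minimal sketch of swap-based local search for k-means, the general technique the abstract describes. This simplified version performs single-center swaps only, which is known to yield a constant-factor guarantee rather than the paper's approximation scheme (the paper's local neighborhood allows swapping many centers at once); all function and parameter names are illustrative, not taken from the paper.

import numpy as np

def kmeans_cost(points, centers):
    # k-means cost: sum of squared Euclidean distances to the nearest center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def local_search_kmeans(points, k, eps=0.05, seed=0):
    # Hypothetical single-swap local search; a sketch, not the paper's PTAS.
    rng = np.random.default_rng(seed)
    # Start from an arbitrary feasible solution: k distinct input points.
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    cost = kmeans_cost(points, centers)
    improved = True
    while improved:
        improved = False
        for i in range(k):            # center to remove
            for p in points:          # candidate center to add
                trial = centers.copy()
                trial[i] = p
                trial_cost = kmeans_cost(points, trial)
                # Accept only substantial improvements, so the number of
                # improving iterations stays polynomially bounded.
                if trial_cost < (1 - eps / k) * cost:
                    centers, cost = trial, trial_cost
                    improved = True
    return centers, cost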

Cited by 69 publications (88 citation statements); references 42 publications.

“…Choosing C as the output of a poly-time constant-factor approximation algorithm for k-means, e.g. [KMN+04, ANFSW17], or, since d = 2, a PTAS for k-means [FRS16, CAKM16], gives the competitive ratio bound O(log k · log log²(2k)).…”
Section: Building a Decision Tree
confidence: 99%
“…In terms of their computational complexity, these problems are hard to approximate within a factor better than 1.1 in high-dimensional Euclidean spaces, and they admit approximation schemes in low dimension [1,17,7]. On the other hand, they admit constant-factor approximation algorithms for high-dimensional Euclidean spaces, better than for general metric spaces [9]. Due to these hardness results, constant approximation factors are not achievable for the explainable clustering formulation.…”
Section: Other Related Work
confidence: 99%
“…K-means is an unsupervised learning algorithm, meaning the data to be processed has no labels [31]. It divides data points into k clusters so that each point belongs to the cluster of its nearest mean [32].…”
Section: HITL-Based K-means Clustering for EV Driver Behavior
confidence: 99%
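For reference, here is a minimal sketch of the standard k-means heuristic (Lloyd's algorithm) that the statement above describes: alternately assign each point to its nearest mean and recompute each mean as its cluster's centroid, until the means stabilize. The names here are illustrative.

import numpy as np

def lloyd_kmeans(points, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize the k means with distinct input points chosen at random.
    means = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(max_iter):
        # Assignment step: each point joins the cluster of its nearest mean.
        d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: each mean becomes the centroid of its cluster
        # (empty clusters keep their previous mean).
        new_means = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else means[j]
            for j in range(k)
        ])
        if np.allclose(new_means, means):
            break
        means = new_means
    return labels, means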