2020
DOI: 10.1137/18m1209854
Turning Big Data Into Tiny Data: Constant-Size Coresets for $k$-Means, PCA, and Projective Clustering

Cited by 77 publications (45 citation statements)
References 27 publications
“…This allows us to handle regularization terms or costs that are common in machine learning, such as computing the X that minimizes cost f(P, X) + |X| over every set X of |X| ≤ k centers (P. K. Agarwal & Mustafa, 2004; Bachem, Lucic, & Krause, 2015). The same property holds for other model parameters, such as j-subspaces (Feldman, Schmidt, & Sohler, 2013a).…”
Section: Why Coresets?
confidence: 67%
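The regularized objective quoted above adds a model-size penalty |X| that does not depend on the data, so any coreset that approximates the k-means cost for every candidate center set X automatically approximates the regularized cost as well. A minimal sketch of that observation follows; `uniform_coreset` is a hypothetical unbiased baseline for illustration only, without the sensitivity-sampling guarantees of the paper:

```python
import numpy as np

def kmeans_cost(P, w, X):
    """Weighted k-means cost: sum_i w_i * min_{x in X} ||p_i - x||^2."""
    d2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (n, |X|) squared distances
    return float((w * d2.min(axis=1)).sum())

def regularized_cost(P, w, X, reg=1.0):
    """k-means cost plus a model-size penalty reg * |X|.

    The penalty is data-independent, so a coreset (C, w) with
    kmeans_cost(C, w, X) ~ kmeans_cost(P, 1, X) for every X gives the
    same approximation for this regularized objective.
    """
    return kmeans_cost(P, w, X) + reg * len(X)

def uniform_coreset(P, m, rng):
    """Hypothetical baseline: uniform sample with weights n/m.

    Unbiased for the cost, but lacks the worst-case (1 +/- eps)
    guarantees of sensitivity-based coreset constructions.
    """
    idx = rng.choice(len(P), size=m, replace=True)
    return P[idx], np.full(m, len(P) / m)
```

With unit weights on the full data the weighted cost coincides with the plain k-means cost, and selecting the number of centers k can then be done on the coreset by comparing `regularized_cost` across candidate X.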
“…For example, the k-means problem is NP-hard when k is part of the input (Mahajan, Nimbhorkar, & Varadarajan, 2009). However, composable coresets of size near-linear in k/ε can be used to produce a (1 ± ε) multiplicative-factor approximation in O(ndk) time, even for streaming and distributed data processed in parallel (Braverman et al, 2016; Feldman, Schmidt, & Sohler, 2013a).…”
Section: Why Coresets?
confidence: 99%
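Composability — merging coresets of disjoint chunks into a coreset of their union without revisiting the data — is what makes the streaming and distributed settings above work. It can be illustrated exactly in the k = 1 case, where a single weighted point plus a scalar offset reproduces the 1-mean cost with no error. This is a toy identity (cost(P, x) = n·||μ − x||² + Σ‖p − μ‖²), not the paper's general construction:

```python
import numpy as np

def one_mean_coreset(P):
    """Exact size-1 summary for the 1-mean cost:
    cost(P, x) = sum_p ||p - x||^2 = n * ||mu - x||^2 + sum_p ||p - mu||^2."""
    n = len(P)
    mu = P.mean(axis=0)
    offset = float(((P - mu) ** 2).sum())
    return mu, n, offset

def coreset_cost(coreset, x):
    """Evaluate the 1-mean cost of the original data from the summary alone."""
    mu, n, offset = coreset
    return n * float(((mu - x) ** 2).sum()) + offset

def merge(c1, c2):
    """Composability: combine summaries of two chunks without the raw data."""
    mu1, n1, off1 = c1
    mu2, n2, off2 = c2
    n = n1 + n2
    mu = (n1 * mu1 + n2 * mu2) / n
    # Extra spread introduced by pulling both chunk means to the joint mean.
    off = off1 + off2 + n1 * float(((mu1 - mu) ** 2).sum()) \
                      + n2 * float(((mu2 - mu) ** 2).sum())
    return mu, n, off
```

In a stream, each arriving chunk is summarized and merged into the running summary, giving constant memory; for general k and for subspaces, the paper's coresets trade the exactness here for (1 ± ε) error at constant size.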