2017
DOI: 10.1109/tsp.2016.2628353

Sparse Approximation of a Kernel Mean

Abstract: Kernel means are frequently used to represent probability distributions in machine learning problems. In particular, the well-known kernel density estimator and the kernel mean embedding both have the form of a kernel mean. Unfortunately, kernel means are faced with scalability issues. A single point evaluation of the kernel density estimator, for example, requires a computation time linear in the training sample size. To address this challenge, we present a method to efficiently construct a sparse approximati…
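The linear-time bottleneck mentioned in the abstract is easy to see in code. Below is a minimal sketch (not the paper's construction) contrasting naive KDE evaluation, which costs O(n) per query, with evaluation over a small weighted subset; the Gaussian kernel, the uniform random subset, and the uniform weights are assumptions made purely for illustration.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between a query point x and sample points y."""
    d2 = np.sum((y - x) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kde(query, samples, sigma=1.0):
    """Naive KDE: one kernel evaluation per training point, O(n) per query."""
    return gaussian_kernel(query, samples, sigma).mean()

def sparse_kde(query, centers, weights, sigma=1.0):
    """Sparse approximation: evaluate only at k << n weighted centers."""
    return np.dot(weights, gaussian_kernel(query, centers, sigma))

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))                    # full sample (n = 10,000)
idx = rng.choice(len(X), size=100, replace=False)   # placeholder selection rule
C, w = X[idx], np.full(100, 1.0 / 100)              # k = 100 uniform-weight centers

q = np.zeros(2)
print(kde(q, X), sparse_kde(q, C, w))               # both estimate the density at q
```

Any principled selection of centers and weights (such as the one studied in the paper) would replace the uniform random subset above.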

Cited by 14 publications (10 citation statements). References 48 publications.
“…Cortes and Scott [12] provide another approach to the sparse kernel mean problem. They run Gonzalez's algorithm [21] for k-center on the points P ∈ R^d (iteratively add points to Q, always choosing the point furthest from every point already in Q) and terminate when the furthest distance to the nearest point in Q is Θ(ε).…”
Section: Known Results on KDE Coresets
confidence: 99%
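The quoted passage describes Gonzalez's farthest-point traversal with an ε-based stopping rule. A minimal sketch of that selection loop, assuming Euclidean distance and an arbitrary first center; the eps value in the usage example is illustrative.

```python
import numpy as np

def gonzalez_eps(P, eps, seed=0):
    """Farthest-point (Gonzalez) selection: grow Q until every point of P
    is within eps of its nearest selected center."""
    rng = np.random.default_rng(seed)
    Q = [int(rng.integers(len(P)))]               # arbitrary first center
    dist = np.linalg.norm(P - P[Q[0]], axis=1)    # distance to nearest center
    while dist.max() > eps:
        j = int(dist.argmax())                    # point furthest from Q
        Q.append(j)
        dist = np.minimum(dist, np.linalg.norm(P - P[j], axis=1))
    return P[Q]

P = np.random.default_rng(1).uniform(size=(5000, 2))
centers = gonzalez_eps(P, eps=0.1)
print(len(centers))  # number of selected centers
```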
“…Paper | Size | Restrictions | Algorithm
Joshi et al. [28] | d/ε² | bounded VC | random sample
Fasy et al. [17] | (d/ε²) log(d∆/ε) | Lipschitz | random sample
Lopez-Paz et al. [31] | 1/ε² | characteristic kernels | random sample
Chen et al. [10] | 1/(ε r_P) | characteristic kernels | iterative
Bach et al. [3] | (1/r_P²) log(1/ε) | characteristic kernels | iterative
Bach et al. [3] | 1/ε² | characteristic kernels, weighted | iterative
Lacoste-Julien et al. [29] | 1/ε² | characteristic kernels | iterative
Harvey and Samadi [23] | (1/ε)√n log^2.5(n) | characteristic kernels | iterative
Cortes and Scott [12] | k | … | …
[37] | Θ(1/ε) | d = 1 | sorting
Table 1: Asymptotic ε-KDE coreset sizes in terms of error ε and dimension d.…”
Section: Coreset Size
confidence: 99%
“…(table fragment; columns: size, runtime, restrictions)
… | 1/ε² | n/ε² | characteristic kernels, weighted
Harvey and Samadi [17] | (1/ε)√n log^2.5(n) | poly(n, 1/ε, d) | characteristic kernels
Cortes and Scott [6] | k | … | …
Sampling bounds. Joshi et al. [21] showed that a random sample of size O((1/ε²)(d + log(1/δ))) results in an ε-kernel coreset for any centrally symmetric, non-increasing kernel.…”
Section: Coreset Size
confidence: 99%
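The sampling bound quoted above suggests the simplest possible coreset construction: a uniform random sample of the stated size. A minimal sketch, treating O((1/ε²)(d + log(1/δ))) as the sample budget with constants omitted; the function name and the exact budget formula are assumptions for illustration.

```python
import numpy as np

def random_sample_coreset(P, eps, delta, rng=None):
    """Uniform random sample of size ~ (1/eps^2)(d + log(1/delta)),
    constants omitted; an unweighted candidate eps-KDE coreset."""
    rng = rng or np.random.default_rng()
    n, d = P.shape
    m = min(n, int(np.ceil((d + np.log(1.0 / delta)) / eps**2)))
    idx = rng.choice(n, size=m, replace=False)
    return P[idx]

P = np.random.default_rng(2).normal(size=(50_000, 3))
Q = random_sample_coreset(P, eps=0.1, delta=0.01)
print(len(Q))  # coreset size, here roughly (3 + log 100) / 0.01 ≈ 760
```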
“…Cortes and Scott [6] provide another approach to the sparse kernel mean problem. They run Gonzalez's algorithm [14] for k-center on the points P ∈ R^d (iteratively add points to Q, always choosing the point furthest from every point already in Q) and terminate when the furthest distance to the nearest point in Q is Θ(ε).…”
Section: Coreset Size
confidence: 99%
“…Since adaptive sampling is often expensive, most methods generate a sketch that will work for any query. Sampling procedures based on herding [7], k-centers [11], and a variety of other cluster-based approaches are available in the literature. These methods operate offline, since efficient adaptive sampling on streaming data is a challenging problem.…”
Section: Related Work
confidence: 99%
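For the herding-based sketches mentioned above, here is a minimal sketch of greedy kernel herding, one common variant: each step selects the candidate maximizing the empirical kernel mean minus the average similarity to the points already selected. Using the sample itself as the candidate set and a Gaussian kernel are assumptions for illustration, not details taken from the cited works.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Pairwise Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_herding(P, k, sigma=1.0):
    """Greedy kernel herding with the sample itself as the candidate set
    (an assumption): step t picks argmax of the empirical kernel mean
    minus the average kernel similarity to the points already selected."""
    n = len(P)
    mu = rbf(P, P, sigma).mean(axis=1)  # empirical kernel mean at each candidate
    running = np.zeros(n)               # sum of k(x, x_s) over selected x_s
    selected = []
    for t in range(k):
        score = mu - running / (t + 1)  # herding objective
        j = int(score.argmax())
        selected.append(j)
        running += rbf(P, P[j][None, :], sigma)[:, 0]
    return P[selected]

P = np.random.default_rng(3).normal(size=(1000, 2))
S = kernel_herding(P, k=50)
print(S.shape)  # (50, 2): herded sketch of the sample
```

Because each step only needs kernel evaluations against the points chosen so far, the sketch is built once offline and then reused for any query, which is the motivation given in the quoted passage.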