Sparse Approximation of a Kernel Mean
Preprint, 2015
DOI: 10.48550/arxiv.1503.00323

Cited by 3 publications (5 citation statements) · References 40 publications
“…Cortes and Scott [12] provide another approach to the sparse kernel mean problem. They run Gonzalez's algorithm [21] for k-center on the points P ∈ R^d (iteratively add points to Q, always choosing the furthest point from any in Q) and terminate when the furthest distance to the nearest point in Q is Θ(ε).…”
Section: Known Results on KDE Coresets
confidence: 99%
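The greedy procedure quoted above (Gonzalez's k-center heuristic with an ε-termination rule) can be sketched as follows; the function names and the choice of Euclidean distance are illustrative assumptions, not code from the cited papers:

```python
def euclid(a, b):
    """Euclidean distance between two points given as tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def gonzalez_sparse_subset(P, eps):
    """Sketch of the described approach: iteratively add to Q the point
    of P furthest from Q, stopping once every point of P lies within
    distance eps of some point in Q."""
    Q = [P[0]]  # arbitrary first center
    # dist[i] holds the distance from P[i] to its nearest point in Q
    dist = [euclid(p, Q[0]) for p in P]
    while max(dist) > eps:
        far = max(range(len(P)), key=dist.__getitem__)  # furthest point
        Q.append(P[far])
        # update nearest-center distances against the new center
        dist = [min(d, euclid(p, P[far])) for d, p in zip(dist, P)]
    return Q
```

The loop maintains the invariant that `dist` records each point's distance to its nearest selected center, so the termination test `max(dist) <= eps` is exactly the stopping rule described in the quote.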
“…
Paper                       Coreset Size            Restrictions                       Algorithm
Joshi et al. [28]           d/ε²                    bounded VC                         random sample
Fasy et al. [17]            (d/ε²) log(d∆/ε)        Lipschitz                          random sample
Lopez-Paz et al. [31]       1/ε²                    characteristic kernels             random sample
Chen et al. [10]            1/(εr_P)                characteristic kernels             iterative
Bach et al. [3]             (1/r_P²) log(1/ε)       characteristic kernels             iterative
Bach et al. [3]             1/ε²                    characteristic kernels, weighted   iterative
Lacoste-Julien et al. [29]  1/ε²                    characteristic kernels             iterative
Harvey and Samadi [23]      (1/ε)√n log^2.5(n)      characteristic kernels             iterative
Cortes and Scott [12]       k₀ (≤ (∆/ε)^d)          Lipschitz; d is constant           iterative
Phillips [37]               (1/ε)…”
Section: Background on Kernels and Related Coresets
confidence: 99%
“…
Paper                       Coreset Size                               Runtime           Restrictions
Joshi et al. [21]           (1/ε²)(d + log(1/δ))                       |Q| samples       centrally symmetric, positive
Fasy et al. [9]             (d/ε²) log(d∆/(εδ))                        |Q| samples       ..
Gretton et al. [15]         (1/ε⁴) log(1/δ)                            |Q| samples       characteristic kernels
Phillips [27]               (1/(εσ))^(2d/(d+2)) log^(d/(d+2))(1/(εσ))  n/ε²              (1/σ)-Lipschitz, d is constant
Phillips [27]               1/ε                                        n log n           d = 1
Chen et al. [4]             1/(εr_P)                                   n/(εr_P)          characteristic kernels
Bach et al. [2]             (1/r_P²) log(1/ε)                          n log(1/ε)/r_P²   characteristic kernels
Bach et al. [2]             1/ε²                                       n/ε²              characteristic kernels, weighted
Harvey and Samadi [17]      (1/ε)√n log^2.5(n)                         poly(n, 1/ε, d)   characteristic kernels
Cortes and Scott [6]        k…”
Section: Paper Coreset Size
confidence: 99%
“…Cortes and Scott [6] provide another approach to the sparse kernel mean problem. They run Gonzalez's algorithm [14] for k-center on the points P ∈ R^d (iteratively add points to Q, always choosing the furthest point from any in Q) and terminate when the furthest distance to the nearest point in Q is Θ(ε).…”
Section: Paper Coreset Size
confidence: 99%
“…Some attention has focused on calculating the kernel density estimates given by a large underlying training dataset [15,22]. As an active area of computer vision, there has been a particular interest in two-dimensional problems [4,20,23]. These often utilize some variation of the fast Gauss transform of Greengard and Strain [9].…”
Section: Prior Work
confidence: 99%