2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2014
DOI: 10.1109/icassp.2014.6854602
Scalable sparse approximation of a sample mean

Cited by 7 publications (6 citation statements) | References 19 publications
“…The former has been studied extensively in the literature. For example, Cruz Cortés and Scott (2014) considers the problem of approximating the kernel mean as a sparse linear combination of the sample.…”
Section: Approximating the Kernel Mean
confidence: 99%
“…The advances along this direction will benefit the development of algorithms using kernel mean embedding. In the context of MMD, many recent studies have addressed this issue (Zaremba et al. 2013, Cruz Cortés and Scott 2014, Ji Zhao 2015, Chwialkowski et al. 2015).…”
Section: Scalability
confidence: 99%
“…, x_n, a kernel φ, and a target sparsity k, we seek a sparse kernel mean (2) that accurately approximates the kernel mean (1). This problem is motivated by applications where n is so large that evaluation or manipulation of the full kernel mean is computationally prohibitive.…”
Section: Introduction
confidence: 99%
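The problem quoted above — approximating the full kernel mean (1) by a sparse combination (2) of k points — can be illustrated with a minimal sketch. The greedy, herding-style selection below is an illustrative stand-in, not the paper's actual algorithm; the Gaussian kernel, the function names, and the uniform weighting are all assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    # Assumed kernel choice for illustration only.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sparse_kernel_mean(X, k, sigma=1.0):
    """Greedily pick k sample points whose uniform combination
    approximates the full kernel mean (a herding-style heuristic,
    not the method of the cited paper)."""
    K = gaussian_kernel(X, X, sigma)      # n x n Gram matrix
    mean_embed = K.mean(axis=1)           # <phi(x_i), full kernel mean> for each i
    selected = []
    approx = np.zeros(X.shape[0])         # <phi(x_i), current sparse mean>
    for _ in range(k):
        # Choose the point most under-represented by the current approximation.
        j = int(np.argmax(mean_embed - approx))
        selected.append(j)
        # Uniform weights over the points selected so far.
        approx = K[:, selected].mean(axis=1)
    weights = np.full(len(selected), 1.0 / len(selected))
    return selected, weights
```

With k much smaller than n, downstream evaluations of the (approximate) kernel mean cost O(k) kernel calls per query instead of O(n), which is the motivation stated in the excerpt.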
“…Finally, Section 7 applies our methodology in three different machine learning problems that rely on large-scale KDEs and KMEs, and demonstrates the efficacy of our approach. A preliminary version of this work appeared in [1]. A Matlab implementation of our algorithm is available at [2].…”
Section: Introduction
confidence: 99%
“…The first category tries to find a smaller subset of samples that approximates the original samples well. For instance, a sparse linear combination of samples can approximate the kernel mean [Cortes and Scott, 2014]. A sparsity-inducing norm can also be imposed on the coefficients of the kernel mean [Muandet et al, 2014].…”
Section: Approximating the Kernel Mean Embedding
confidence: 99%