Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-75488-6_5

A Hilbert Space Embedding for Distributions

Abstract: We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation. Kernel methods are widely used in supervis…
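For intuition, here is a minimal sketch of the comparison the abstract describes: estimate the distance between the kernel mean embeddings of two samples (the maximum mean discrepancy) directly from kernel evaluations, with no density estimation step. The function names, the Gaussian kernel choice, and the bandwidth are illustrative assumptions, not taken from the paper's own code.

```python
# Minimal sketch: two-sample comparison via kernel mean embeddings (MMD).
# Names and parameter choices are illustrative assumptions.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD, i.e. ||mu[X] - mu[Y]||^2 in the RKHS."""
    m, n = len(X), len(Y)
    k_xx = gaussian_kernel(X, X, sigma).sum() / m**2
    k_yy = gaussian_kernel(Y, Y, sigma).sum() / n**2
    k_xy = gaussian_kernel(X, Y, sigma).sum() / (m * n)
    return k_xx + k_yy - 2 * k_xy

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(200, 2))   # sample from Q (shifted mean)
print(mmd2(X, Y))
```

Two samples drawn from the same distribution would give a value near zero; the shifted sample above yields a clearly positive value.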

Citations: cited by 339 publications (502 citation statements)
References: 25 publications (28 reference statements)
“…Proposed scheme: Inspired by Ref. [28], probability distributions can be embedded in an RKHS. At the center of the HED are the mean mapping functions:…”
Section: Adaptive Threshold for Change Detection (mentioning; confidence: 99%)
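For reference, the "mean mapping functions" in this excerpt presumably refer to the standard kernel mean map; in the usual notation of the embedding literature (an assumption, since the notation of Ref. [28] is not shown here):

```latex
% Population mean map and its empirical counterpart for a sample X = {x_1, ..., x_m}:
\mu[P_x] = \mathbb{E}_{x \sim P_x}\big[k(x, \cdot)\big],
\qquad
\mu[X] = \frac{1}{m} \sum_{i=1}^{m} k(x_i, \cdot).
```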
“…As long as the Rademacher average [29], which measures the "size" of a class of real-valued functions with respect to a probability distribution, is well behaved, the finite-sample error converges to zero, so the samples empirically approximate μ(P_x) (see Ref. [28] for more details).…”
Section: Adaptive Threshold for Change Detection (mentioning; confidence: 99%)
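The guarantee alluded to here has, schematically and up to constants (a paraphrased shape rather than the verbatim theorem of Ref. [28]), the following form:

```latex
% With probability at least 1 - \delta over an i.i.d. sample X of size m,
% the empirical mean map converges to the population one:
\big\lVert \mu[P_x] - \mu[X] \big\rVert_{\mathcal{H}}
  \;\le\; 2\, R_m(\mathcal{H}, P_x)
  \;+\; O\!\left(\sqrt{\log(1/\delta)/m}\right)
```

where $R_m(\mathcal{H}, P_x)$ denotes the Rademacher average of the unit ball of the RKHS with respect to $P_x$.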
“…In terms of the Hilbert space embedding, the density function estimate results from the inner product of the mapped point φ(u) with the mean of the distribution μ[P_u]. The mean map μ, introduced by Smola et al. (2007), allows for the definition of a similarity measure between two sampled sets P_u and P_v, sampled from the same or two different distributions. The measure is defined to be D(P_u, P_v)…”
Section: Maximum Mean Discrepancy (mentioning; confidence: 99%)
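To make the inner-product view in this excerpt concrete, here is a small sketch: with a normalized Gaussian kernel, evaluating ⟨φ(u), μ[X]⟩ against an empirical mean map reduces to a Parzen-window style density estimate. The function name and the bandwidth are illustrative assumptions.

```python
# Sketch: <phi(u), mu[X]> with a normalized Gaussian kernel is a
# Parzen-window density estimate. Names and bandwidth are illustrative.
import numpy as np

def density_estimate(u, X, sigma=0.5):
    """<phi(u), mu[X]> = (1/m) * sum_i k(x_i, u), with k normalized to integrate to 1."""
    d = X.shape[1]
    norm = (2 * np.pi * sigma**2) ** (-d / 2)
    sq = np.sum((X - u)**2, axis=1)
    return norm * np.mean(np.exp(-sq / (2 * sigma**2)))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1))
# Prints roughly 0.36: the N(0,1) density at 0 (about 0.40), smoothed by the kernel width.
print(density_estimate(np.zeros(1), X))
```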
“…This makes similarity measures such as the Kullback-Leibler divergence and the Bhattacharyya coefficient computationally unstable (Yang & Duraiswami, 2005). Additionally, these techniques require sophisticated space-partitioning and/or bias-correction strategies (Smola et al., 2007).…”
Section: Visual Tracking Through Density Comparison (mentioning; confidence: 99%)
“…Next, consider the measure based on reproducing kernel Hilbert space (RKHS) embeddings [34]. It has been shown that if a kernel κ is characteristic, then there exists a unique, one-to-one mapping between the space of probability density functions and the mean operator µ in the RKHS.…”
Section: E. Equivalence of Several Measures of Multivariate Dependence (mentioning; confidence: 99%)
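The injectivity property this excerpt invokes is the standard defining condition of a characteristic kernel (for example, the Gaussian RBF kernel on ℝ^d is characteristic):

```latex
% A kernel k is characteristic iff the mean map is injective:
\mu[P] = \mu[Q] \;\Longrightarrow\; P = Q
\qquad \text{for all probability distributions } P, Q.
```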