2021
DOI: 10.1214/21-ecp383

Strong equivalence between metrics of Wasserstein type

Abstract: The sliced Wasserstein metric SW_p and, more recently, the max-sliced Wasserstein metric max-SW_p have attracted abundant attention in data science and machine learning due to their ability to mitigate the curse of dimensionality, see e.g. [15], [6]. A question of particular importance is the strong equivalence between these projected Wasserstein metrics and the (classical) Wasserstein metric W_p. Recently, Paty and Cuturi proved in [14] the strong equivalence of max-SW_2 and W_2. We show that the strong equivalence also hol…
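The projection idea behind the sliced metric discussed in the abstract can be sketched numerically: a Monte Carlo estimate of sliced W_1 averages one-dimensional Wasserstein distances over random directions on the sphere. A minimal sketch, assuming samples as rows of numpy arrays (the helper name `sliced_w1` and all parameter values are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_w1(X, Y, n_proj=200, seed=None):
    """Monte Carlo estimate of the sliced 1-Wasserstein distance.

    Projects both samples onto random unit vectors and averages the
    1-D W1 distances, which scipy computes in closed form from the
    sorted projected samples.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # uniform direction on the sphere
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / n_proj

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(1.0, 1.0, size=(500, 10))  # shifted by 1 in every coordinate
print(sliced_w1(X, Y, seed=0))
```

Each one-dimensional distance is cheap, which is the computational advantage the abstract alludes to; the trade-off is the Monte Carlo error over directions.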


Cited by 11 publications (12 citation statements). References 12 publications.
“…h ≥ 0 (RBF kernels are positive definite). We conclude that k − k is positive definite, hence (23) holds for RBF kernels.…”
Section: A3 Proof of Theorem
confidence: 67%
“…We conclude that (23) holds with F defined as the unit ball of the RKHS associated with the linear kernel k(t i , t j ) = t i t j for t i , t j ∈ R, and F being the unit ball of the RKHS associated with the rescaled linear kernel k(x i , x j ) = x i x j /d for x i , x j ∈ R d .…”
Section: A3 Proof of Theorem
confidence: 80%
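As a concrete illustration of the construction in the quote above: for the linear kernel k(x, y) = x·y, the MMD over the unit ball of the associated RKHS reduces to the norm of the difference of sample means. A minimal numerical sketch (the helper name `linear_mmd` is illustrative; the 1/d rescaling from the quote is omitted):

```python
import numpy as np

def linear_mmd(X, Y):
    """Empirical MMD for the linear kernel k(x, y) = x @ y.

    Expanding the (biased) V-statistic estimator of MMD^2 with this
    kernel, every Gram-matrix average reduces to an inner product of
    sample means, so MMD equals the norm of their difference. The
    corresponding RKHS unit ball therefore separates distributions
    only through their first moments.
    """
    return np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2000, 3))
Y = rng.normal(0.5, 1.0, size=(2000, 3))
print(linear_mmd(X, Y))  # close to ||(0.5, 0.5, 0.5)|| ≈ 0.87
```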
“…The sliced distances SW_p and max-SW_p are metrics on P_p(R^d) and, in fact, induce the same topology as W_p [BG21].…”
Section: Sliced Wasserstein Distances
confidence: 99%
“…Besides, SW has been shown to offer nice theoretical properties as well. Indeed, it satisfies the metric axioms [13], the estimators obtained by minimizing SW are asymptotically consistent [14], the convergence in SW is equivalent to the convergence in Wasserstein [14,15], and even though the sample complexity of Wasserstein grows exponentially with the data dimension [16,17,18], the sample complexity of SW does not depend on the dimension [19]. However, the latter study also demonstrated with a theoretical error bound, that the quality of the Monte Carlo estimate of SW depends on the number of projections and the variance of the one-dimensional Wasserstein distances [19,Theorem 6].…”
Section: Introduction
confidence: 99%
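The dependence on the number of projections noted in the last quote can be observed directly: the spread of repeated Monte Carlo estimates of sliced W_1 shrinks roughly like 1/sqrt(n_proj). A small experiment, with all names and parameter values illustrative:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sw_estimate(X, Y, n_proj, rng):
    """One Monte Carlo estimate of sliced W1 using n_proj random directions."""
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return np.mean([wasserstein_distance(X @ t, Y @ t) for t in thetas])

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(300, 20))
Y = rng.normal(0.5, 1.0, size=(300, 20))

# Repeat the estimator and watch its standard deviation fall
# as the number of projections grows.
for n_proj in (5, 50, 500):
    reps = [sw_estimate(X, Y, n_proj, rng) for _ in range(30)]
    print(n_proj, round(float(np.std(reps)), 4))
```

This is the variance/projection-count trade-off quantified by the error bound the quote cites [19, Theorem 6].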