2020
DOI: 10.48550/arxiv.2006.08172
Preprint

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence

Lenaic Chizat,
Pierre Roussillon,
Flavien Léger
et al.

Abstract: The squared Wasserstein distance is a natural quantity to compare probability distributions in a non-parametric setting. This quantity is usually estimated with the plug-in estimator, defined via a discrete optimal transport problem. It can be solved to ε-accuracy by adding an entropic regularization of order ε and using for instance Sinkhorn's algorithm. In this work, we propose instead to estimate it with the Sinkhorn divergence, which is also built on entropic regularization but includes debiasing terms. We…
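
The abstract contrasts the plug-in entropic estimator with the debiased Sinkhorn divergence, S_ε(µ, ν) = OT_ε(µ, ν) − ½ OT_ε(µ, µ) − ½ OT_ε(ν, ν). Below is a minimal NumPy sketch of that construction, not the authors' implementation: the uniform empirical weights, squared Euclidean ground cost, fixed iteration budget, and all function names are assumptions made for illustration.

import numpy as np
from scipy.special import logsumexp

def entropic_ot(x, y, eps=0.05, n_iters=500):
    """Entropic OT cost OT_eps between uniform measures on points x (n,d) and y (m,d)."""
    n, m = len(x), len(y)
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)  # squared Euclidean cost
    log_a = np.full(n, -np.log(n))  # log of uniform weights 1/n
    log_b = np.full(m, -np.log(m))
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):
        # Log-domain Sinkhorn updates (numerically stable for small eps).
        f = -eps * logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    # At the Sinkhorn fixed point the dual value reduces to <a, f> + <b, g>.
    return np.exp(log_a) @ f + np.exp(log_b) @ g

def sinkhorn_divergence(x, y, eps=0.05):
    """Debiased estimator: subtracts the two self-transport bias terms."""
    return (entropic_ot(x, y, eps)
            - 0.5 * entropic_ot(x, x, eps)
            - 0.5 * entropic_ot(y, y, eps))

# Usage: compare samples from two Gaussians (hypothetical data for illustration).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))
print(sinkhorn_divergence(x, y, eps=0.1))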

Cited by 7 publications (12 citation statements). References 30 publications.

Citation statements, ordered by relevance:
“…The performance of other algorithms can be further investigated. Finally, it is possible to reduce the computational time for our method by parallel computing, faster Monte Carlo sampling methods (Rubinstein and Kroese 2016) and more efficient computation algorithms of Wasserstein distances (Chizat et al 2020), which we leave for future research.…”
Section: Discussion (mentioning)
confidence: 99%
“…For more information on I_0(µ, ν) see [14]. We remark that when α ≥ 2, then under A4, A5, A6, it has been shown that I_0(µ, ν) ≤ C for a positive constant C [14]. The notation a ≍ b means that there exist positive constants c, C such that ca ≤ b ≤ Ca, and a ≲ b means that there is a positive constant C such that a ≤ Cb.…”
Section: Properties of the Entropic Map (mentioning)
confidence: 99%
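
The two asymptotic notations quoted in that excerpt can be restated in display form. This is a minimal LaTeX rendering of the definitions as stated above; the symbols ≍ and ≲ are inferred from the quoted inequalities, which the extraction had dropped.

% a is of the same order as b (two-sided bound):
a \asymp b \;\Longleftrightarrow\; \exists\, c, C > 0 \ \text{such that}\ c\,a \le b \le C\,a,
% a is bounded above by b up to a constant (one-sided bound):
a \lesssim b \;\Longleftrightarrow\; \exists\, C > 0 \ \text{such that}\ a \le C\,b.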
“…The approach taken in this paper and the techniques used for its analysis bear various connections to recent developments in the literature on optimal transport, e.g., on the estimation of (smooth) optimal transport maps [40,41,42,43,44]. Key steps in our proofs are based on adaptations of parts of the analysis in [42,43,44].…”
Section: Introduction (mentioning)
confidence: 99%