2019
DOI: 10.48550/arxiv.1909.07862
Preprint

Minimax Confidence Intervals for the Sliced Wasserstein Distance

Tudor Manole,
Sivaraman Balakrishnan,
Larry Wasserman

Abstract: The Wasserstein distance has risen in popularity in the statistics and machine learning communities as a useful metric for comparing probability distributions. We study the problem of uncertainty quantification for the Sliced Wasserstein distance, an easily computable approximation of the Wasserstein distance. Specifically, we construct confidence intervals for the Sliced Wasserstein distance which have finite-sample validity under no assumptions or mild moment assumptions, and are adaptive in length to the smo…
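The Sliced Wasserstein distance described in the abstract is straightforward to estimate by Monte Carlo: project both samples onto random directions, compute the one-dimensional Wasserstein distance on each projection via order statistics, and average. A minimal sketch of this idea (the function name, equal-size samples, and normalized Gaussian projection directions are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def sliced_wasserstein(X, Y, p=2, n_projections=100, rng=None):
    """Monte Carlo estimate of the p-Sliced Wasserstein distance between
    the empirical distributions of samples X and Y (arrays of shape (n, d))."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a uniformly random direction on the unit sphere.
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        # Project both samples onto the direction. For equal-size 1-D samples,
        # the p-Wasserstein distance is a p-norm of differences of order statistics.
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        total += np.mean(np.abs(x_proj - y_proj) ** p)
    return (total / n_projections) ** (1.0 / p)
```

Each projection costs only a sort, which is what makes the quantity "easily computable" relative to the full d-dimensional Wasserstein distance.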


Cited by 2 publications (2 citation statements)
References 26 publications
“…Faster rates of convergence for estimating optimal transport costs are achievable under strong conditions on X and Y. For instance, the bound ∆_n(c) ≲ n^{-1/2} is known to hold when c is a metric raised to a power p ≥ 1, and when X and Y are one-dimensional (Munk and Czado, 1998; Freitag and Munk, 2005; del Barrio et al., 2019b; Manole et al., 2019) or countable (Sommerfeld and Munk, 2018; Tameling et al., 2019). In both of these cases, the corresponding empirical p-Wasserstein distance is known to exhibit distinct convergence rates depending on whether µ and ν are vanishingly close or not, similar to our findings in equation (5).…”
Section: Related Work
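The one-dimensional case invoked in this quote is special because the optimal coupling simply matches sorted samples, which is what makes the n^{-1/2} rate attainable there. A minimal sketch of the empirical 1-D distance under that order-statistics matching (the helper name is hypothetical):

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    """Empirical p-Wasserstein distance between two equal-size 1-D samples.
    In one dimension the optimal coupling matches order statistics, so the
    distance reduces to a p-norm of differences between sorted values."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))
```

With two size-n samples from the same distribution, this statistic is typically of order n^{-1/2}, the parametric rate the quote refers to.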
“…Faced with the difficulty of estimating W_2^2(µ, ν), researchers have also turned their attention to similar but more tractable discrepancy measures, such as the sliced Wasserstein distance [47] or the Sinkhorn divergence [48], both of which can be estimated at the parametric rate [25, 38, 37, 40]. However, there is "no free lunch", and unconditional statistical efficiency comes at the price of a lack of adaptivity and discriminative power.…”
Section: Introduction
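The Sinkhorn divergence mentioned in this quote is the debiased, entropy-regularized transport cost. A minimal sketch under standard choices (squared-Euclidean cost, uniform weights, a fixed number of Sinkhorn iterations; all of these are illustrative assumptions rather than details from the cited works):

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=0.5, n_iters=200):
    """Entropy-regularized OT cost between the uniform empirical measures
    on samples X (shape (n, d)) and Y (shape (m, d))."""
    n, m = len(X), len(Y)
    # Squared-Euclidean cost matrix C[i, j] = ||X_i - Y_j||^2.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)                        # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                    # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]             # entropic transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(X, Y, eps=0.5):
    """Debiased divergence: S(X, Y) = OT(X, Y) - (OT(X, X) + OT(Y, Y)) / 2."""
    return sinkhorn_cost(X, Y, eps) - 0.5 * (
        sinkhorn_cost(X, X, eps) + sinkhorn_cost(Y, Y, eps)
    )
```

The debiasing term makes the divergence vanish when the two samples coincide, which is why it behaves as a discrepancy measure rather than a raw regularized cost.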