2021
DOI: 10.48550/arxiv.2106.01954
Preprint

Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark

Abstract: Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport, specifically computation of the Wasserstein-2 distance, a commonly used formulation of optimal transport in machine learning. To overcome the challenge of computing ground-truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (…
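The construction the abstract alludes to rests on Brenier's theorem: the gradient of a convex potential is a Wasserstein-2 optimal map from a measure to its pushforward, so an input-convex neural network (ICNN) yields pairs of measures whose ground-truth OT map is known by construction. The PyTorch sketch below only illustrates that idea; the class name, layer sizes, activations, and toy Gaussian source are my assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): an input-convex potential psi(x) and the map
# T(x) = grad psi(x). By Brenier's theorem, T is the W2-optimal map from a source
# measure P to its pushforward T#P, giving a benchmark pair with known ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network: convexity in x follows from non-negative weights on the
    hidden path and convex, non-decreasing activations (softplus)."""
    def __init__(self, dim, hidden=(64, 64, 32)):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, h) for h in hidden] + [nn.Linear(dim, 1)])
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden[i], hidden[i + 1], bias=False) for i in range(len(hidden) - 1)]
            + [nn.Linear(hidden[-1], 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamping the hidden-path weights to be non-negative preserves convexity in x
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return z  # scalar convex potential psi(x), shape (batch, 1)

def gradient_map(psi, x):
    """T(x) = grad psi(x): the W2-optimal map from the source measure to its pushforward."""
    x = x.requires_grad_(True)
    (grad,) = torch.autograd.grad(psi(x).sum(), x, create_graph=True)
    return grad

psi = ICNN(dim=2)
x = torch.randn(512, 2)     # toy source measure P (standard Gaussian)
y = gradient_map(psi, x)    # samples from the target measure Q = T#P, with T known exactly
```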

Cited by 3 publications (9 citation statements). References 24 publications (59 reference statements).
“…Neural optimal transport: Following the benchmark results of the neural OT algorithms benchmark [24], we choose two methods to apply DA: W2GN [23] and MM:R [41,24]. We used Dense ICNN [23] with three hidden layers [64,64,32] as potentials φ and ψ for the W2GN and MM:R neural OT methods.…”
Section: Baselines (mentioning)
confidence: 99%
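As a rough illustration of that configuration, the two potentials could be set up as follows. This is a toy sketch only: it assumes the simplified ICNN class from the sketch after the abstract is in scope, whereas the cited Dense ICNN architecture has additional structure that the simplified class omits.

```python
# Hypothetical instantiation of the two potentials with hidden sizes [64, 64, 32];
# reuses the simplified ICNN class sketched above, not the cited Dense ICNN.
phi = ICNN(dim=2, hidden=(64, 64, 32))
psi = ICNN(dim=2, hidden=(64, 64, 32))
```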
“…Two popular examples of OT costs are the Wasserstein-1 (W₁ [8,30]) and the (square of) Wasserstein-2 (W₂ [46,40,64]) distances, which use c(x, y) = ‖x − y‖ and c(x, y) = ½‖x − y‖², respectively. Weak OT.…”
Section: Background On Optimal Transport (mentioning)
confidence: 99%
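For reference, the quadratic-cost problem these statements refer to can be written in the standard Kantorovich form below. This is a textbook formulation using the ½-scaled convention from the quote above, not text from the citing paper.

```latex
% Kantorovich formulation of quadratic-cost optimal transport (the benchmark's setting).
% \Pi(\mathbb{P},\mathbb{Q}) denotes the set of couplings (transport plans) of P and Q.
\mathbb{W}_2^2(\mathbb{P},\mathbb{Q})
  \;=\; \min_{\pi \in \Pi(\mathbb{P},\mathbb{Q})}
        \int \tfrac{1}{2}\,\lVert x - y \rVert^{2} \,\mathrm{d}\pi(x, y).
```

The Wasserstein-1 case from the same quote is obtained by substituting the cost c(x, y) = ‖x − y‖ into the same formulation.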
“…Optimal transport (OT) is a powerful framework to solve mass-moving and generative modeling problems for data distributions. Recent works [42,64,40,19,17] propose scalable neural methods to compute OT plans (or maps). They show that the learned transport plan (or map) can be used directly as the generative model in data synthesis [64] and unpaired learning [42,64,17,23].…”
Section: Introduction (mentioning)
confidence: 99%
“…OT maps in high-dimensional ambient spaces, e.g., natural images, are usually not considered. Recent evaluation of continuous OT methods for W₂ (Korotin et al., 2021b) reveals their crucial limitations, which negatively affect their scalability, such as poor expressiveness of ICNN architectures or bias due to regularization.…”
Section: Optimal Transport In Generative Models (mentioning)
confidence: 99%
“…Issues with non-uniqueness of the solution of (12) were softened, but using ICNNs to parametrize ψ became necessary. Korotin et al. (2021b) demonstrated that ICNNs negatively affect the practical performance of OT and tested an unconstrained formulation similar to (11). As per the evaluation, it provided the best empirical performance (Korotin et al., 2021b, 4.5).…”
Section: Equal Dimensions Of Input And Output Distributions (mentioning)
confidence: 99%
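For context, the unconstrained formulation referred to here is a max-min (dual) objective in which both the potential ψ and the map T are parametrized by ordinary, non-convex networks. The sketch below is my own toy rendering of that idea; the architectures, optimizers, step counts, and Gaussian toy data are assumptions, not the cited papers' setup.

```python
# Hedged sketch of a max-min quadratic-cost OT solver with unconstrained (non-ICNN) networks:
#   max over psi of  E_Q[psi(y)] + E_P[ min over T of  0.5*||x - T(x)||^2 - psi(T(x)) ].
# At the optimum, T approximates the W2-optimal map from P to Q.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

dim = 2
T, psi = mlp(dim, dim), mlp(dim, 1)          # ordinary MLPs, no convexity constraint
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_psi = torch.optim.Adam(psi.parameters(), lr=1e-4)

def sample_p(n):
    return torch.randn(n, dim)               # toy source measure P

def sample_q(n):
    return 0.5 * torch.randn(n, dim) + 2.0   # toy target measure Q

for step in range(2000):
    # inner minimization over the map T (in practice several inner steps per outer step)
    x = sample_p(256)
    tx = T(x)
    loss_T = (0.5 * (x - tx).pow(2).sum(dim=1) - psi(tx).squeeze(1)).mean()
    opt_T.zero_grad()
    loss_T.backward()
    opt_T.step()

    # outer maximization over the potential psi (T is frozen here via detach)
    x, y = sample_p(256), sample_q(256)
    tx = T(x).detach()
    dual = psi(y).mean() + (0.5 * (x - tx).pow(2).sum(dim=1) - psi(tx).squeeze(1)).mean()
    opt_psi.zero_grad()
    (-dual).backward()
    opt_psi.step()
```

The ICNN-based methods discussed in the quotes above instead constrain the potential to be input-convex, which is the restriction these citation statements report as hurting practical performance.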