2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01090

Max-Sliced Wasserstein Distance and Its Use for GANs

Abstract: Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities, showing promise for dataset augmentation, image-to-image translation and feature learning. However, to model high-dimensional distributions, sequential training and stacked architectures are common, increasing the number of tunable hyper-parameters as well as the training time. Nonetheless, the sample complexity of the distance metrics remains one of the factors affecting GAN tra…

Cited by 90 publications (107 citation statements)
References 26 publications

“…To tackle the issue of complexity, a sliced version of the Wasserstein distance was studied and employed, which only requires estimating distances of projected uni-dimensional distributions and is, therefore, more efficient, see e.g. [3], [6], [13].…”
Section: Projected Wasserstein Metrics (mentioning)
confidence: 99%
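
The mechanism described in this excerpt, estimating only one-dimensional Wasserstein distances of projected distributions, can be made concrete with a short sketch. The following is a minimal Monte-Carlo illustration and not code from the indexed paper; the function name, parameters, and the equal-sample-size assumption are choices made here for exposition.

import numpy as np

def sliced_wasserstein(X, Y, n_projections=200, p=2, seed=0):
    """Monte-Carlo estimate of the sliced p-Wasserstein distance between two
    equal-size empirical distributions given as (n, d) sample arrays."""
    assert X.shape == Y.shape
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random unit vectors (projection directions) on the (d-1)-sphere.
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    total = 0.0
    for theta in thetas:
        # One-dimensional projections of both sample sets.
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        # For equal-size empirical measures the optimal 1-D coupling is the
        # monotone (sorted) matching, so W_p^p reduces to a mean of |.|^p gaps.
        total += np.mean(np.abs(x_proj - y_proj) ** p)
    return (total / n_projections) ** (1.0 / p)

# Example: two Gaussian clouds in 10 dimensions.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(0.5, 1.0, size=(500, 10))
print(sliced_wasserstein(X, Y))

Each projection only costs a sort, so the estimate scales like O(n_projections * n log n), which is what makes the sliced construction cheaper than solving a d-dimensional optimal transport problem directly.
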
“…[15], [3], [18]. Very recently, in order to reduce the projection complexity of the sliced Wasserstein, [6] introduced the so-called max-Wasserstein metrics, which we will denote by W_p, as a fix. The same paper also points out that both of these projected versions of the Wasserstein distance enjoy the so-called generalizability over the Wasserstein metric.…”
Section: Introduction (mentioning)
confidence: 99%
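
The max-sliced variant referenced in this excerpt replaces the average over projections with a supremum over directions. The sketch below is only a hedged approximation of that supremum by searching over random candidate directions; in the cited works the direction is optimized (for example by gradient ascent), which is not shown here, and the helper names are chosen for illustration.

import numpy as np

def wasserstein_1d(x, y, p=2):
    """Exact 1-D p-Wasserstein distance between equal-size empirical measures,
    using the monotone (sorted) coupling."""
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

def max_sliced_wasserstein(X, Y, n_candidates=1000, p=2, seed=0):
    """Crude approximation of the max-sliced distance: keep the single direction
    along which the projected distributions differ the most."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    thetas = rng.normal(size=(n_candidates, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return max(wasserstein_1d(X @ theta, Y @ theta, p) for theta in thetas)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(0.5, 1.0, size=(500, 10))
print(max_sliced_wasserstein(X, Y))

Because a single informative direction suffices, the max-sliced construction avoids averaging over many uninformative projections, which is the reduction in projection complexity the excerpt refers to.
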
“…An alternative principle for approximating the Wasserstein distance comes from the Radon transform: project a high-dimensional distribution onto one-dimensional distributions. One representative example is the sliced Wasserstein distance [8,23,14,24], which is defined as the average Wasserstein distance obtained between random one-dimensional projections. In other words, the sliced Wasserstein distance is calculated via linear slicing of the probability distribution.…”
Section: Introduction (mentioning)
confidence: 99%
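
For reference, the standard definitions behind this excerpt can be written as follows; the notation is chosen here and may differ from the cited papers. The sliced distance averages the one-dimensional Wasserstein distance over the unit sphere, while the max-sliced variant takes a supremum:

SW_p(\mu, \nu) = \left( \int_{\mathbb{S}^{d-1}} W_p^p\!\left(\theta_{\#}\mu, \theta_{\#}\nu\right) \, d\sigma(\theta) \right)^{1/p},
\qquad
\max\text{-}SW_p(\mu, \nu) = \sup_{\theta \in \mathbb{S}^{d-1}} W_p\!\left(\theta_{\#}\mu, \theta_{\#}\nu\right),

where \theta_{\#}\mu denotes the pushforward of \mu under the projection x \mapsto \langle \theta, x \rangle and \sigma is the uniform measure on the unit sphere \mathbb{S}^{d-1}. In practice the integral is replaced by a Monte-Carlo average over random directions, matching the "average Wasserstein distance obtained between random one-dimensional projections" phrasing above.
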
“…Its important extensions, such as [34,31], have recently been proposed to search for the k-dimensional subspace that maximizes the Wasserstein distance between two measures after projection. The sample complexity of such estimators, between two measures and their empirical counterparts, is investigated in [15,14].…”
Section: Introduction (mentioning)
confidence: 99%
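
A hedged sketch of the inner computation behind the k-dimensional extensions mentioned in this excerpt: project both sample sets onto a fixed k-dimensional subspace and evaluate the Wasserstein-2 distance there exactly (for equal-size empirical measures) via optimal assignment. The search over subspaces that maximizes this quantity, which is the contribution of the cited works, is not shown, and all names below are illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def projected_wasserstein2(X, Y, U):
    """W_2 between equal-size empirical measures after projecting onto the
    subspace spanned by the orthonormal columns of U (shape (d, k))."""
    Xp, Yp = X @ U, Y @ U
    cost = cdist(Xp, Yp, metric="sqeuclidean")   # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)     # optimal matching (exact OT here)
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(1)
d, k = 10, 3
X = rng.normal(0.0, 1.0, size=(300, d))
Y = rng.normal(0.5, 1.0, size=(300, d))
# A random orthonormal basis for a k-dimensional subspace (QR of a Gaussian matrix).
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
print(projected_wasserstein2(X, Y, U))
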