2018
DOI: 10.48550/arxiv.1801.01401
Preprint

Demystifying MMD GANs

Cited by 138 publications (208 citation statements)
References 0 publications
“…Metrics. We evaluate our methods on FID [15] and KID [3] for image quality. To assess the overfitting of the methods, we further evaluate them on latent recovery [54], density and coverage [39] metrics.…”
Section: Results of Slim-FastGANs (mentioning)
confidence: 99%
“…For the ViT discriminator, they use an embedding feature loss calculated from positional and patch features of the transformer encoder layers by feeding in the real and synthesized FA images. Their quantitative results in terms of Fréchet Inception Distance [320] and Kernel Inception Distance [321] demonstrate the superiority of their approach over baseline methods on the diabetic retinopathy dataset provided by Hajeb et al. [322].…”
Section: Semi-Supervised Methods (mentioning)
confidence: 92%
“…Evaluating generative models in any domain is a notoriously difficult task (Theis et al., 2016). However, researchers have found success through the use of sample-based evaluation metrics that estimate the distance ρ between the real and generated distributions P_r and P_g by drawing random samples (Heusel et al., 2017; You et al., 2018; Bińkowski et al., 2018). That is, they compute ρ(S_g, S_r) ≈ ρ(P_g, P_r), with S_r = {x_1^r, .…”
Section: Background and Related Work (mentioning)
confidence: 99%
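
A minimal sketch of the sample-based protocol quoted above: draw finite samples S_r and S_g and compute ρ(S_g, S_r) as a proxy for ρ(P_g, P_r). The energy distance is used here only as a simple stand-in for ρ, and the function and variable names are illustrative assumptions, not taken from the cited papers; FID and KID follow the same draw-samples-then-compare pattern with a different ρ.

    # Sketch of sample-based evaluation: rho(S_g, S_r) approximates rho(P_g, P_r).
    # The energy distance below is only a stand-in for rho (illustrative choice).
    import numpy as np
    from scipy.spatial.distance import cdist

    def energy_distance(S_g: np.ndarray, S_r: np.ndarray) -> float:
        """Simple plug-in (biased) estimate of the energy distance between P_g and P_r.

        S_g, S_r: arrays of shape (n, d) and (m, d) with generated and real samples.
        """
        d_gr = cdist(S_g, S_r).mean()   # E||X_g - X_r||
        d_gg = cdist(S_g, S_g).mean()   # E||X_g - X_g'|| (diagonal zeros included)
        d_rr = cdist(S_r, S_r).mean()   # E||X_r - X_r'|| (diagonal zeros included)
        return 2.0 * d_gr - d_gg - d_rr

    # Usage: draw random samples from each distribution and compare.
    rng = np.random.default_rng(0)
    S_r = rng.normal(loc=0.0, scale=1.0, size=(500, 16))   # "real" samples
    S_g = rng.normal(loc=0.5, scale=1.0, size=(500, 16))   # "generated" samples
    print(energy_distance(S_g, S_r))  # larger value => distributions further apart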
“…MMD (Gretton et al., 2006) (Equation 1) can also be used to measure the dissimilarity between graph embedding distributions. The original Kernel Inception Distance (KID) (Bińkowski et al., 2018) proposed a polynomial kernel with MMD, k(x_i, x_j) = ((1/d) x_i^T x_j + 1)^3, where d is the embedding dimension. The linear kernel k(x_i, x_j) = x_i · x_j is another parameter-free kernel used with MMD to evaluate generative models (O'Bray et al., 2021).…”
Section: Neural-Network-Based Metrics (mentioning)
confidence: 99%
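
A minimal sketch of KID's core computation as quoted above: the unbiased MMD² estimator with the cubic polynomial kernel k(x_i, x_j) = ((1/d) x_i^T x_j + 1)^3 applied to precomputed embedding features. Assumptions: the features (e.g., Inception activations) are already extracted, the subset/block averaging that KID uses in practice is omitted, and the function names are illustrative.

    # Sketch of KID's core: unbiased MMD^2 with the cubic polynomial kernel on
    # precomputed embeddings. Block/subset averaging used in practice is omitted.
    import numpy as np

    def polynomial_kernel(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3, d = embedding dimension."""
        d = X.shape[1]
        return (X @ Y.T / d + 1.0) ** 3

    def mmd2_unbiased(X: np.ndarray, Y: np.ndarray) -> float:
        """Unbiased estimate of MMD^2 between the distributions behind X and Y."""
        m, n = X.shape[0], Y.shape[0]
        K_xx = polynomial_kernel(X, X)
        K_yy = polynomial_kernel(Y, Y)
        K_xy = polynomial_kernel(X, Y)
        # Drop diagonal terms for the unbiased estimator.
        sum_xx = (K_xx.sum() - np.trace(K_xx)) / (m * (m - 1))
        sum_yy = (K_yy.sum() - np.trace(K_yy)) / (n * (n - 1))
        sum_xy = K_xy.mean()
        return float(sum_xx + sum_yy - 2.0 * sum_xy)

    # Usage with hypothetical precomputed embeddings (e.g., Inception features):
    rng = np.random.default_rng(0)
    real_feats = rng.normal(size=(1000, 2048))
    fake_feats = rng.normal(size=(1000, 2048))
    print(mmd2_unbiased(real_feats, fake_feats))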