2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01153

Detecting Overfitting of Deep Generative Networks via Latent Recovery

Abstract: State-of-the-art deep generative networks are capable of producing images with such incredible realism that they can be suspected of memorizing training images. This is why it is not uncommon to include visualizations of training-set nearest neighbors, to suggest that generated images are not simply memorized. We demonstrate that this is not sufficient and motivate the need to study memorization/overfitting of deep generators with more scrutiny. This paper addresses this question by i) showing how simple losses are highl…

Cited by 72 publications (62 citation statements)
References 25 publications
“…Considering the role played by the ImageNet Large Scale Visual Recognition Challenge [12] (ILSVRC) in the advancement of CNN-based image classification models, the importance of an evaluation metric can be understood better. In this respect, many different evaluation metrics have been proposed by researchers [9], including Recovery Error [13], IS [5], FID [6], and Kernel Inception Distance (KID) [14]. Although IS and FID have gained increasing popularity and are commonly used, both unfortunately fail to determine whether a proposed GAN model is overfitting or underfitting.…”
Section: Preliminaries
confidence: 99%
“…In order to detect overfitting, [13] proposed using recovery error. By optimizing a random input vector z, a GAN model attempts to reproduce each image in the training set and the validation set, yielding recovery-error distributions for the training-set reconstructions and the validation-set reconstructions.…”
Section: Preliminaries
confidence: 99%
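The recovery procedure described in the statement above can be sketched with a toy linear generator. This is an illustrative assumption: the paper applies latent recovery to trained deep generators via autodiff, and `latent_recovery` is a hypothetical helper name, not the authors' code.

```python
import numpy as np

def latent_recovery(G, x, z_dim, steps=500, lr=0.1, seed=0):
    """Optimize a latent vector z so that G(z) approximates the target
    image x; return the final mean squared recovery error.

    Illustrative simplification: G is a linear map (a d x z_dim matrix),
    so the loss gradient is available in closed form instead of via
    backpropagation through a deep network."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)
    for _ in range(steps):
        residual = G @ z - x
        grad = 2.0 * G.T @ residual / x.size  # d/dz of mean((Gz - x)^2)
        z -= lr * grad
    return float(np.mean((G @ z - x) ** 2))
```

Running this over a training set and a held-out set produces the two recovery-error distributions whose comparison underlies the overfitting test: images inside the generator's range recover almost perfectly, while unseen generic images leave a clear residual.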
“…Our model has been implemented on TensorFlow version 1.14.0, CUDA Toolkit version 10. [36]. MRE summarizes the recovery-error distribution as a single value.…”
Section: B. Training Details and Loss Function
confidence: 99%
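The MRE statistic mentioned above collapses a set of per-image recovery errors into one number via the median, and overfitting shows up as a gap between the training and validation values. A minimal sketch, assuming `mre_gap` as a hypothetical helper (the paper reports MRE per set rather than this ratio):

```python
import numpy as np

def mre_gap(train_errors, val_errors):
    """Compare median recovery errors (MRE) of training vs held-out
    images. A ratio well above 1 means the generator recovers training
    images much better than unseen ones, suggesting memorization."""
    mre_train = float(np.median(train_errors))
    mre_val = float(np.median(val_errors))
    return mre_val / mre_train
```

For a well-generalizing generator the two medians are comparable and the ratio stays near 1.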