2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00596
Non-Adversarial Image Synthesis With Generative Latent Nearest Neighbors

Cited by 39 publications (32 citation statements)
References 7 publications
“…See Section 4.5, Table 2, and Figure 3 for the advantageous performance of our IMLE-GAN. Several follow-up works, e.g., GLANN [20] and conditional IMLE [31], also validate the improved effectiveness of the IMLE framework in a reconstructive generation context.…”
Section: Reconstructive Generation: IMLE
confidence: 66%
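The IMLE framework referenced in this excerpt fits a generator without a discriminator: at each step it samples many latent codes, generates from them, matches each training example to its nearest generated sample, and pulls only those matched samples toward the data. A minimal NumPy sketch of that matching-and-update loop, using a hypothetical linear generator `W` on toy vectors (an illustrative stand-in, not the paper's actual model or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: n points in d dimensions (real IMLE uses images and a deep net).
n, d, z_dim, m = 64, 8, 4, 128
data = rng.normal(size=(n, d))

# Hypothetical linear "generator" G(z) = z @ W; names and shapes are illustrative.
W = rng.normal(scale=0.1, size=(z_dim, d))

def nn_loss(W, n_samples=1024):
    """Mean squared distance from each data point to its nearest generated sample."""
    fakes = rng.normal(size=(n_samples, z_dim)) @ W
    d2 = ((data[:, None, :] - fakes[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

initial_loss = nn_loss(W)
lr = 0.05
for _ in range(200):
    z = rng.normal(size=(m, z_dim))
    fakes = z @ W
    # IMLE matching step: each data point selects its nearest generated sample...
    nn = ((data[:, None, :] - fakes[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    # ...and only those matched samples are pulled toward the data (least-squares gradient).
    grad_W = z[nn].T @ (fakes[nn] - data) / n
    W -= lr * grad_W

final_loss = nn_loss(W)
```

Because every training example is matched to some generated sample, the objective directly penalizes mode dropping, which is the "reconstructive generation" property the excerpt highlights.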
“…1 for an example) and/or vanishing generator gradients due to the discriminator becoming far better at distinguishing fake samples from real samples [1]. Non-adversarial approaches [4,11,16] have recently been explored to tackle these challenges. For example, Generative Latent Optimization (GLO) [4] and Generative Latent Nearest Neighbors (GLANN) [11] investigate the importance of inductive bias in convolutional networks by disconnecting the discriminator for a non-adversarial learning protocol of GANs.…”
Section: MoCoGAN (Adversarial)
confidence: 99%
“…Non-adversarial approaches [4,11,16] have recently been explored to tackle these challenges. For example, Generative Latent Optimization (GLO) [4] and Generative Latent Nearest Neighbors (GLANN) [11] investigate the importance of inductive bias in convolutional networks by disconnecting the discriminator for a non-adversarial learning protocol of GANs. These works show that, without a discriminator, a generator can be learned that maps the training images in the given data distribution to a lower-dimensional latent space learned in conjunction with the weights of the generative network.…”
Section: MoCoGAN (Adversarial)
confidence: 99%
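The GLO-style training described in this excerpt assigns one learnable latent code per training image and optimizes those codes jointly with the generator weights under a plain reconstruction loss, with no discriminator. A minimal NumPy sketch under toy assumptions (a hypothetical linear map `W` stands in for GLO's deconvolutional generator; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data; in GLO these would be images and a deep generator network.
n, d, z_dim = 32, 6, 3
data = rng.normal(size=(n, d))

# One learnable latent code per training image, optimized jointly with the
# generator parameters (here a linear map W, for illustration only).
Z = rng.normal(scale=0.1, size=(n, z_dim))
W = rng.normal(scale=0.1, size=(z_dim, d))

initial_error = ((Z @ W - data) ** 2).mean()
lr = 0.1
for _ in range(500):
    err = Z @ W - data                 # reconstruction residual, no discriminator
    grad_W = Z.T @ err / n             # gradient step on generator weights...
    grad_Z = err @ W.T                 # ...and on each image's latent code
    W -= lr * grad_W
    Z -= lr * grad_Z
    # GLO-style projection: keep each latent code inside the unit ball so the
    # learned latent space stays bounded.
    norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1.0)
    Z = Z / norms

recon_error = ((Z @ W - data) ** 2).mean()
```

The joint optimization is the key design choice: because the codes are free parameters rather than outputs of an encoder or a fixed prior sample, the reconstruction loss alone drives both the embedding of the training set and the generator that decodes it.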
“…We can think about techniques of few-shot learning [224] (to train models from a few examples), generative adversarial networks [225] or non-adversarial generative models [226] (for example, to generate data when these are scarce), or Siamese networks [227] (to establish if two images provided as input belong to the same class). Similarly, the use of 3D ConvNets [228] to directly process three-dimensional information is non-existent, as well as the combined use of recurrent and convolutional networks [229] to carry out more complex tasks (such as the textual description of images automatically, i.e., image captioning).…”
confidence: 99%