2020
DOI: 10.1609/aaai.v34i04.5852
An Efficient Explorative Sampling Considering the Generative Boundaries of Deep Generative Neural Networks

Abstract: Deep generative neural networks (DGNNs) have achieved realistic and high-quality data generation. In particular, the adversarial training scheme has been applied to many DGNNs and has exhibited powerful performance. Despite recent advances in generative networks, identifying the image generation mechanism remains challenging. In this paper, we present an explorative sampling algorithm to analyze the generation mechanism of DGNNs. Our method efficiently obtains samples with identical attributes from a quer…


Cited by 9 publications (5 citation statements)
References 20 publications
“…For example, in CNNs, decision regions are divided by polyhedral cones (Carlsson 2019) so that the angular difference between feature maps becomes highly related to the Configuration distance. This aligns with the empirical successes of prior work using the Cosine similarity in the feature space (Fong and Vedaldi 2018;Kim et al 2018;Bachman, Hjelm, and Buchwalter 2019;Jeon, Jeong, and Choi 2020). We plan to explore this phenomenon further in our future work.…”
Section: Analysis For the Distance Metrics (supporting)
confidence: 77%
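As a rough illustration of the angular-distance point made in this statement, the sketch below compares two flattened CNN feature maps with a cosine measure. The tensors, shapes, and the `cosine_distance` helper are hypothetical; this only shows why an angular measure is a natural proxy for a configuration-style distance in feature space, not the cited papers' actual metric.

```python
import numpy as np

def cosine_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Angular dissimilarity between two flattened feature maps.

    Hypothetical helper: 0.0 means the maps point in the same direction,
    1.0 means they are orthogonal, 2.0 means they point in opposite directions.
    """
    a = feat_a.ravel().astype(np.float64)
    b = feat_b.ravel().astype(np.float64)
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return 1.0 - float(cos_sim)

# Toy feature maps (channels x height x width), e.g. from an intermediate CNN layer.
rng = np.random.default_rng(0)
feat_query = rng.normal(size=(64, 8, 8))
feat_sample = feat_query + 0.1 * rng.normal(size=(64, 8, 8))  # nearby sample

print(cosine_distance(feat_query, feat_sample))  # small angular difference
```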
“…Another work (Shen et al 2020) trains a linear classifier based on artifact-labeled data and removes artifacts by moving the latent code over the trained hyperplane. A sampling method with the trained generative boundaries was suggested to explain shared semantic information in the generator (Jeon, Jeong, and Choi 2020). Classifier-based defective internal featuremap unit identification was devised (Tousi et al 2021).…”
Section: Related Work (mentioning)
confidence: 99%
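A minimal sketch of the hyperplane-editing idea mentioned in this statement, assuming a linear classifier has already been fit on artifact-labeled latent codes; the toy data, classifier, step size, and `remove_artifact` helper are assumptions for illustration, not the cited papers' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical artifact-labeled latent codes (1 = artifact, 0 = clean).
rng = np.random.default_rng(1)
latents = rng.normal(size=(500, 128))
labels = (latents[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Linear classifier in latent space; its weight vector is the hyperplane normal.
clf = LogisticRegression(max_iter=1000).fit(latents, labels)
normal = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

def remove_artifact(z: np.ndarray, step: float = 0.5, max_steps: int = 10) -> np.ndarray:
    """Move a latent code across the hyperplane toward the 'clean' side."""
    z_edit = z.copy()
    for _ in range(max_steps):
        if clf.predict(z_edit[None])[0] == 0:  # already classified as clean
            break
        z_edit = z_edit - step * normal
    return z_edit

z_bad = latents[labels == 1][0]
z_fixed = remove_artifact(z_bad)
print(clf.predict(z_bad[None])[0], clf.predict(z_fixed[None])[0])  # e.g. 1 -> 0
```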
“…In this section, we present our main contribution, the concept of local activation and its relation with low visual fidelity for individual generations. From previous research (Bau et al 2019;Jeon, Jeong, and Choi 2020;Tousi et al 2021), we can presume that each internal featuremap unit in the generator handles a specific object (e.g., tree, glasses) for the final generation. In particular, an artifact that has low visual fidelity can also be considered as a type of object.…”
Section: Locally Activated Neurons in GANs (mentioning)
confidence: 99%
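To make the notion of a "locally activated" unit concrete, here is a hedged sketch that scores how spatially concentrated each featuremap unit's activation is for a single generation; the concentration score, threshold, and toy tensors are illustrative choices, not the measure defined in the citing paper.

```python
import numpy as np

def local_activation_scores(featuremaps: np.ndarray, top_frac: float = 0.05) -> np.ndarray:
    """For each unit (channel), the fraction of total activation mass that falls
    in its top `top_frac` spatial locations. Values near 1.0 mean the unit fires
    in a small localized region; values near `top_frac` mean the activation is
    spread evenly over the map.
    """
    n_units, h, w = featuremaps.shape
    flat = np.abs(featuremaps.reshape(n_units, -1))
    k = max(1, int(top_frac * h * w))
    top_k = np.sort(flat, axis=1)[:, -k:]
    return top_k.sum(axis=1) / (flat.sum(axis=1) + 1e-12)

# Toy intermediate featuremaps from one generated image (units x height x width).
rng = np.random.default_rng(2)
maps = rng.random(size=(512, 16, 16)) * 0.01
maps[7, 2:5, 2:5] = 5.0  # unit 7 fires only in a small patch

scores = local_activation_scores(maps)
print(np.argsort(scores)[-3:])  # unit 7 should rank as the most localized
```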
“…From the previous research [10] that shallow layers handle the abstract generation concepts and deeper layers handle localized information in GANs, we ablate the shallow layers from the first layer to the stopping layer l < L. To prevent the loss of semantic characteristics of a generation as pointed in Section 3.3, we adjust the magnitude of the original featuremaps instead of the simple zero ablation. Line 5 of Algorithm 1 states this soft ablation as,…”
Section: Sequential Correction of Artifacts (mentioning)
confidence: 99%
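The quoted equation is truncated and is not reproduced here, but the gist of soft ablation can be sketched as scaling, rather than zeroing, the featuremaps of the shallow layers up to a stopping layer. The PyTorch hook, the scale factor, and the stand-in generator below are assumptions used only for illustration, not the citing paper's Algorithm 1.

```python
import torch
import torch.nn as nn

def soft_ablate(module: nn.Module, scale: float = 0.3):
    """Register a forward hook that shrinks a layer's featuremaps by `scale`
    instead of zeroing them (zero ablation would correspond to scale = 0.0)."""
    def hook(_mod, _inp, out):
        return out * scale
    return module.register_forward_hook(hook)

# Toy generator stand-in: shallow layers handle abstract structure,
# deeper layers handle localized detail (per the quoted observation).
generator = nn.Sequential(
    nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.ReLU(),   # shallow
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # shallow
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),    # deep / output
)

# Soft-ablate the shallow conv layers before the stopping layer (l < L).
shallow_convs = [generator[0], generator[2]]
handles = [soft_ablate(m) for m in shallow_convs]

z = torch.randn(1, 64, 4, 4)
corrected = generator(z)          # forward pass with soft ablation applied
for h in handles:
    h.remove()                    # restore the original generator behaviour
print(corrected.shape)            # torch.Size([1, 3, 32, 32])
```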