Proceedings of the 27th ACM International Conference on Multimedia 2019
DOI: 10.1145/3343031.3350980

Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation

Abstract: In this work, we propose a novel Cycle In Cycle Generative Adversarial Network (C²GAN) for the task of keypoint-guided image generation. The proposed C²GAN is a cross-modal framework exploring a joint exploitation of the keypoint and the image data in an interactive manner. C²GAN contains two different types of generators, i.e., keypoint-oriented generator and image-oriented generator. Both of them are mutually connected in an end-to-end learnable fashion and explicitly form three cycled sub-networks, i.…

Cited by 98 publications
(81 citation statements)
References 38 publications
“…Tang et al [67] proposed a keypoint-guided image generation method called Cycle In Cycle Generative Adversarial Network, which can generate photo-realistic person pose images. Ma et al [68] proposed a person image generation method called Pose Guided Person Generation Network, and it can synthesize high-quality person images with arbitrary poses based on a person image and a pose.…”
Section: Other Methods
confidence: 99%
“…The reconstruction model reconstructs the fingerprint image x given S_x. We cast fingerprint reconstruction as image-to-image translation [29], [30], where the minutiae set is first converted to a minutiae map M_x ∈ R^(h×w×3) following [27], [31], [32]. M_x is encoded into a latent vector w ∈ R^512 by a Minutiae-To-Style CNN (Section III-C), which is then fed to the pretrained generator G to reconstruct the input image x.…”
Section: B. Fingerprint Reconstruction
confidence: 99%
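The statement above converts a minutiae set into an h×w×3 map before encoding. The cited works define their own channel layout, so the following is only a minimal illustrative sketch, assuming each minutia is an (x, y, θ) triple and the three channels encode presence and orientation as (1, cos θ, sin θ):

```python
import numpy as np

def minutiae_to_map(minutiae, h, w):
    """Rasterize a minutiae set into an h x w x 3 minutiae map.

    Each minutia is (x, y, theta). Channel 0 marks presence at the
    minutia location; channels 1-2 encode orientation as cos/sin.
    This channel layout is an assumption for illustration, not the
    exact encoding of [27], [31], [32].
    """
    m = np.zeros((h, w, 3), dtype=np.float32)
    for x, y, theta in minutiae:
        r, c = int(round(y)), int(round(x))
        if 0 <= r < h and 0 <= c < w:  # drop minutiae outside the image
            m[r, c, 0] = 1.0
            m[r, c, 1] = np.cos(theta)
            m[r, c, 2] = np.sin(theta)
    return m
```

In practice such maps are usually smoothed (e.g., with a Gaussian around each minutia) before being fed to a CNN; the sketch omits that step for brevity.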
“…Therefore, we modify our model with some classic approaches that are widely used in GANs or related works, and the modified models serve as the baselines. Img&RF: To enable the RF-based condition setting, we propose an RF-Extractor with an RNN to encode RF heatmaps and use RF-InNorm to inject the extracted information. An alternative approach is to concatenate the RF condition with the input image directly, which is effective when the condition provides explicit guidance for the GAN, e.g., pose-guided human synthesis [25]. However, RF conditions are obscure data and have totally different spatial structures from optical images.…”
Section: Baselines
confidence: 99%
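The statement above contrasts two ways of injecting a condition into a generator: direct channel concatenation versus normalization-based injection (RF-InNorm). The exact RF-InNorm formulation is not given here, so the following is a hedged sketch of the general pattern, in the spirit of conditional instance normalization / AdaIN, where condition-derived scale and shift parameters modulate normalized features:

```python
import numpy as np

def concat_condition(feat, cond):
    """Baseline injection: stack condition channels onto the feature map.

    Works when the condition shares the feature map's spatial layout,
    e.g., a pose heatmap aligned with the image.
    """
    return np.concatenate([feat, cond], axis=-1)

def conditional_instance_norm(feat, gamma, beta, eps=1e-5):
    """Normalization-based injection (illustrative, not the paper's RF-InNorm).

    Each channel of feat (H x W x C) is normalized to zero mean and unit
    variance, then scaled by gamma and shifted by beta. In a real model,
    gamma and beta would be predicted from the encoded RF condition.
    """
    mean = feat.mean(axis=(0, 1), keepdims=True)
    std = feat.std(axis=(0, 1), keepdims=True)
    return gamma * (feat - mean) / (std + eps) + beta
```

The normalization route avoids assuming any spatial correspondence between the condition and the image, which is why it suits obscure conditions such as RF heatmaps better than concatenation.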
“…Researchers have explored various kinds of conditions, e.g., category labels [19], text descriptions [20]-[22], and images [23]-[31]. From a technology perspective, most existing GAN models require the conditions either to guide the GAN model explicitly [19], [23], [25], [26], [31] or to be transformable into conditional variables for the GAN using an existing pre-trained model [20]-[22].…”
Section: Introduction
confidence: 99%