2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00519
Cross-Domain Correspondence Learning for Exemplar-Based Image Translation

Cited by 253 publications (305 citation statements) | References 26 publications
“…Specifically, it introduces a context-preserving loss to learn an identity function. CoCosNet [11] presents an end-to-end framework for exemplar-based image translation and learns dense semantic correspondence between cross-domain images through weakly supervised learning. A novel generative model named the swapping autoencoder [82] shows excellent performance on image manipulation tasks; it encodes each image into two disentangled components and maps the swapped features in an unsupervised manner.…”
Section: A. Image-to-Image Translation
confidence: 99%
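For intuition, the dense cross-domain correspondence described in the statement above is commonly realized as a correlation (attention) matrix between channel-normalized feature maps of the two domains, followed by a softmax-weighted warp of the exemplar. The following is a minimal PyTorch sketch of that mechanism; the function name, tensor shapes, and temperature value are illustrative assumptions, not CoCosNet's actual code.

import torch
import torch.nn.functional as F

def warp_exemplar(feat_a, feat_b, exemplar_rgb, tau=0.01):
    """Warp an exemplar toward an input via dense feature correspondence.

    feat_a: (B, C, H, W) features of the input (e.g., a segmentation map)
    feat_b: (B, C, H, W) features of the exemplar image
    exemplar_rgb: (B, 3, H, W) exemplar pixels, assumed here to be at the
        same resolution as the feature maps
    tau: softmax temperature; smaller values give sharper matches
    """
    B, C, H, W = feat_a.shape
    # Channel-normalize so the dot-product correlation behaves like
    # cosine similarity between spatial locations.
    a = F.normalize(feat_a.view(B, C, -1), dim=1)      # (B, C, HW)
    b = F.normalize(feat_b.view(B, C, -1), dim=1)      # (B, C, HW)
    corr = torch.bmm(a.transpose(1, 2), b)             # (B, HW, HW)
    attn = F.softmax(corr / tau, dim=-1)               # row-wise match weights
    # Average exemplar pixels according to the matching weights.
    ex = exemplar_rgb.view(B, 3, -1).transpose(1, 2)   # (B, HW, 3)
    warped = torch.bmm(attn, ex).transpose(1, 2)       # (B, 3, HW)
    return warped.view(B, 3, H, W)

Note that the HW x HW correlation is quadratic in spatial size, so in practice the warp is computed at a reduced resolution; the warped exemplar then serves as the style input that conditions the generator.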
“…In the first stage, the frames of the target person F_t and the extracted pose figures P_t are fed into a generative adversarial network (GAN) to learn the mapping between the pose figures and the person foreground. For the background texture, we use the spatially-adaptive denormalization (SPADE) block [83] to restore it by projecting the spatially structured style information onto different activation locations [11]. In the second stage, we feed the pose figures P_s of the source person and the frames F_t of the target person into the trained model to generate person images I_g under the source person's pose P_s.…”
Section: Human Pose Transfer Network Architecture, 1) Cross-Domain
confidence: 99%
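Since the statement above leans on the SPADE block [83], a minimal PyTorch sketch of spatially-adaptive denormalization may help: the semantic layout is projected to per-pixel scale and shift parameters that modulate the normalized activations. The hidden width and specific layer choices here are illustrative assumptions.

import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive denormalization (Park et al. [83]), sketched.

    Normalizes activations, then modulates them with a per-pixel scale
    (gamma) and shift (beta) predicted from a semantic layout, so spatial
    structure in the layout is preserved through normalization.
    """
    def __init__(self, norm_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(norm_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, norm_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, norm_channels, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the layout to the activation resolution, then predict
        # spatially-varying modulation parameters from it.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)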