2020
DOI: 10.1111/cgf.14165

Simultaneous Multi‐Attribute Image‐to‐Image Translation Using Parallel Latent Transform Networks

Abstract: Image-to-image translation has been widely studied. Since real-world images can often be described by multiple attributes, it is useful to manipulate several of them at the same time. However, most methods focus on translating between two domains, and when multiple single-attribute transform networks are chained together, the results depend on the chaining order and performance drops because intermediate results fall outside the training domains. Existing multi-domain transfer methods mostly manipulate multiple attr…
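As a rough illustration of the parallel design the abstract contrasts with chaining, the sketch below assumes an encoder/decoder pair and one small latent transform network (LTN) per attribute; the module names, shapes, and the additive fusion of latent edits are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of parallel latent transform networks (hypothetical names/shapes).
# Each attribute gets its own small LTN acting on a shared latent code; the active
# edits are fused in latent space and decoded once, so no attribute edit has to
# consume another edit's decoded, possibly out-of-domain, intermediate image.
import torch
import torch.nn as nn


class LatentTransformNet(nn.Module):
    """One per attribute: proposes a latent-space edit for that attribute."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


class ParallelMultiAttributeTranslator(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module, num_attrs: int, dim: int = 256):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.ltns = nn.ModuleList(LatentTransformNet(dim) for _ in range(num_attrs))

    def forward(self, x: torch.Tensor, attr_mask: torch.Tensor) -> torch.Tensor:
        # attr_mask: shape (num_attrs,), 1 for attributes to edit, 0 otherwise.
        z = self.encoder(x)                                 # shared latent code
        edits = [m * ltn(z) for m, ltn in zip(attr_mask, self.ltns)]
        z_edit = z + torch.stack(edits, dim=0).sum(dim=0)   # fuse edits in latent space
        return self.decoder(z_edit)                         # decode a single result
```

A chained alternative would instead decode after every single-attribute network and re-encode for the next one, which is where the order dependence and out-of-domain drift mentioned in the abstract arise.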

Cited by 2 publications (2 citation statements)
References 28 publications

“…The foreground is directly blended into the background by optimizing the Poisson blending loss together with style and content losses computed from a deep network, combined with iterative pixel updates using the L-BFGS solver to reconstruct the blended regions [88]. Xu et al [89] utilized latent transform networks (LTNs) to combine attributes while transforming image features in parallel. Sbai et al [90] considered foreground category similarity when synthesizing images, controlling the realism of the overall photo to produce more interesting composites.…”
Section: Image Rendering (mentioning)
confidence: 99%
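The blending pipeline attributed to [88] is only summarized above; a minimal sketch of the described optimization loop might look like the following, where poisson_loss, content_loss, style_loss and the loss weights are assumed placeholders rather than the authors' code.

```python
# Hypothetical sketch of iterative pixel updates with L-BFGS, combining a Poisson
# blending loss with deep style/content losses, as described for [88].
import torch


def blend(foreground, background, mask, poisson_loss, content_loss, style_loss,
          steps: int = 50, w_grad: float = 1.0, w_content: float = 1.0, w_style: float = 100.0):
    # Start from a naive cut-and-paste composite and optimize its pixels directly.
    blended = (mask * foreground + (1 - mask) * background).clone().requires_grad_(True)
    optimizer = torch.optim.LBFGS([blended], max_iter=steps)

    def closure():
        optimizer.zero_grad()
        loss = (w_grad * poisson_loss(blended, foreground, background, mask)
                + w_content * content_loss(blended, foreground, mask)
                + w_style * style_loss(blended, background))
        loss.backward()
        return loss

    optimizer.step(closure)
    return blended.detach()
```

In practice the optimization would typically be restricted to pixels inside the mask; the sketch keeps the whole image as the optimization variable for brevity.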
“…Compared with supervised deep learning, unsupervised deep learning appeared relatively early, which allowed closer integration with traditional image information and methods (Zhang et al [88]; Xu et al [89]; Sbai et al [90]). This type of method learns image information from a large amount of data and computes a matching score within the image, thereby improving how well the background and foreground match.…”
Section: Image Rendering (mentioning)
confidence: 99%