2018
DOI: 10.48550/arxiv.1808.04325
Preprint

Improving Shape Deformation in Unsupervised Image-to-Image Translation

Cited by 3 publications (5 citation statements)
References 0 publications
“…Intriguingly, this offers an explanation for a number of rather disconnected findings: CNNs match texture appearance for humans (Wallis et al, 2017), and their predictive power for neural responses along the human ventral stream appears to be largely due to human-like texture representations, but not human-like contour representations (Laskar et al, 2018; Long & Konkle, 2018). Furthermore, texture-based generative modelling approaches such as style transfer (Gatys et al, 2016), single image super-resolution (Gondal et al, 2018), as well as static and dynamic texture synthesis (Gatys et al, 2015) all produce excellent results using standard CNNs, while CNN-based shape transfer seems to be very difficult (Gokaslan et al, 2018). CNNs can still recognise images with scrambled shapes (Brendel & Bethge, 2019), but they have much more difficulty recognising objects with missing texture information (Ballester & de Araújo, 2016; Yu et al, 2017).…”
Section: Discussion
confidence: 99%
“…To overcome image deformation in unsupervised image translation, we combined the position-based selection strategy (Yang et al 2018) with a multi-scale structural similarity loss (Wang et al 2003, Gokaslan et al 2018). We added skip connections to the generator to eliminate artifacts in the images it generates (Gokaslan et al 2018). As new images of domain T could not have exactly the same characteristics as the training images of domain T, we also input new images of domain T into the generator to obtain good consistency across all normalized images.…”
Section: Normalizing Methods
confidence: 99%
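The statement above pairs a multi-scale structural similarity (MS-SSIM) consistency term with a skip-connection generator. The following is a minimal PyTorch sketch of both ideas, not the cited papers' actual code: the SkipGenerator architecture, the toy cycle x -> F(G(x)), and the use of the third-party pytorch_msssim package are all assumptions made for illustration.

import torch
import torch.nn as nn
from pytorch_msssim import ms_ssim  # assumed dependency: pip install pytorch-msssim

class SkipGenerator(nn.Module):
    # Encoder-decoder generator with one skip connection from the first
    # encoder block to the last decoder block, so low-level detail can
    # bypass the bottleneck and reduce artifacts in the output.
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True))
        # The skip connection concatenates dec2's output with enc1's
        # features, hence 2 * ch input channels here.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)                      # 3 -> ch channels, H/2 resolution
        d2 = self.dec2(self.enc2(e1))          # down to 2*ch, back up to ch at H/2
        return self.dec1(torch.cat([d2, e1], dim=1))  # skip + upsample to H

def msssim_consistency_loss(real, reconstructed):
    # MS-SSIM equals 1 for identical images, so 1 - score is a loss that
    # penalizes structural (shape) drift between input and reconstruction.
    real01 = (real + 1) / 2                    # map tanh range [-1, 1] to [0, 1]
    rec01 = (reconstructed + 1) / 2
    return 1 - ms_ssim(real01, rec01, data_range=1.0)

# Usage sketch: penalize structural drift over a cycle x -> G(x) -> F(G(x)).
G, F = SkipGenerator(), SkipGenerator()
x = torch.rand(2, 3, 256, 256) * 2 - 1         # toy batch in [-1, 1]
loss = msssim_consistency_loss(x, F(G(x)))
loss.backward()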
“…In particular, unpaired (or unsupervised) image-to-image translation has achieved impressive progress based on variants of generative adversarial networks (GANs) (Liu et al, 2017; Choi et al, 2017; Almahairi et al, 2018; Lee et al, 2018), and has also drawn considerable attention due to its practical applications, including colorization, super-resolution (Ledig et al, 2017), semantic manipulation (Wang et al, 2018b), and domain adaptation (Bousmalis et al, 2017; Shrivastava et al, 2017; Hoffman et al, 2017). Previous methods on this line of research, however, often fail on challenging tasks, in particular when the translation involves significant changes in the shape of instances or when the images to translate contain multiple target instances (Gokaslan et al, 2018). Our goal is to extend image-to-image translation towards such challenging tasks, which can take its applicability to the next level, e.g., changing pants to skirts in fashion images to help a customer decide which one to buy.…”
Section: Introduction
confidence: 99%
“…To the best of our knowledge, we are the first to report image-to-image translation results for multi-instance transfiguration tasks. A few recent methods (Liu et al, 2017; Gokaslan et al, 2018) show some transfiguration results, but only for images with a single instance, often against a clear background. Unlike these previous results in a simple setting, our focus is on the harmony of instances naturally rendered with the background.…”
Section: Introduction
confidence: 99%