2020
DOI: 10.1111/cgf.14186

Structural Analogy from a Single Image Pair

Abstract: The task of unsupervised image‐to‐image translation has seen substantial advancements in recent years through the use of deep neural networks. Typically, the proposed solutions learn the characterizing distribution of two large, unpaired collections of images, and are able to alter the appearance of a given image, while keeping its geometry intact. In this paper, we explore the capabilities of neural networks to understand image structure given only a single pair of images, A and B. We seek to generate images …


Citations: Cited by 20 publications (22 citation statements)
References: 40 publications
“…Single-image generative models have been explored for domains other than textures. GANs trained on a single image have been used for image retargeting [61], deep image analogies [62], or for learning a single-sample image generative model [63], [64], [65], [66], [67]. These methods, while powerful for natural images, are not well-behaved for textures, as shown in [15] and [18].…”
Section: Deep Internal Learning
confidence: 99%
“…Exploiting the power of deep neural networks in the image analogies problem was tackled by Liao et al [23] who, by assuming a semantic prior over an exemplar input image and a target one, propose a method capable of finding a bijective mapping between both inputs, enabling two-way stylizations. Single-image generative models [24] were extended to the image analogies problem in [25], by using convolutional neural networks to generate a new image with the style of one input and the structure of another. These methods, however, rely heavily on content or semantic features, making them vulnerable to lighting or geometric differences between the input images, and are computationally expensive, rendering them impractical for interactive applications.…”
Section: Related Work
confidence: 99%
“…In addition, our model is trained in less time with a smaller computational footprint (1 minute vs 5 minutes) compared to [23] and Structural Analogies [25].…”
Section: Interactive Stylizations
confidence: 99%