2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.723

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

Figure 1 (panels: Sketch, Sketch + Color, Generated results). A user can sketch and scribble colors to control deep image synthesis. On the left is an image generated from a hand-drawn sketch. On the right, several objects have been deleted from the sketch, a vase has been added, and the color of various scene elements has been constrained by sparse color strokes.

Abstract: Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditiona…

Cited by 468 publications (342 citation statements)
References 44 publications
“…To avoid this prerequisite, CycleGAN, DiscoGAN, and DualGAN were designed following cycle consistency for training on unpaired data. These methods have proven effective in various tasks, such as collection style transfer, object transfiguration, season transfer, and generating photographs from sketches. Motivated by them, we learn the two-way mapping between water images and their corresponding surface geometries, with the forward mapping reconstructing the water surfaces and the backward mapping synthesizing photorealistic images of waves.…”
Section: Related Work (mentioning)
confidence: 99%
“…These methods have proven effective in various tasks, such as collection style transfer, object transfiguration, season transfer, and generating photographs from sketches [30]. Motivated by them, we learn the two-way mapping between water images and their corresponding surface geometries, with the forward mapping reconstructing the water surfaces and the backward mapping synthesizing photorealistic images of waves. Moreover, we include a subnetwork in our framework to extract and reuse the lighting conditions from existing images.…”
Section: Related Work (mentioning)
confidence: 99%
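Both excerpts lean on the same cycle-consistency idea, so a minimal sketch may help. Below is an illustrative PyTorch rendering of the CycleGAN-style cycle-consistency objective on unpaired data; the generator names g_ab/g_ba, the water-image example in the comments, and the weight of 10 are assumptions for exposition, not details taken from the citing paper.

```python
import torch.nn.functional as nnf

def cycle_loss(g_ab, g_ba, real_a, real_b, lambda_cyc=10.0):
    """Cycle-consistency term shared by CycleGAN-style methods.

    g_ab and g_ba are hypothetical generators mapping domain A to B
    (e.g., water image -> surface geometry) and back; with unpaired
    data, each round trip should reconstruct its starting sample.
    """
    # Forward cycle: A -> B -> A should approximately return real_a.
    rec_a = g_ba(g_ab(real_a))
    # Backward cycle: B -> A -> B should approximately return real_b.
    rec_b = g_ab(g_ba(real_b))
    # L1 reconstruction penalties, weighted as in CycleGAN (lambda = 10).
    return lambda_cyc * (nnf.l1_loss(rec_a, real_a) +
                         nnf.l1_loss(rec_b, real_b))
```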
“…Sketch Inversion [13] is a deep neural network for inverting face sketches to synthesize photorealistic face images in the wild. Scribbler [12] is a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms or faces. Pix2pix [30] is a conditional adversarial network (cGAN) [31]; it learns the mapping from an input image to an output image.…”
Section: Sketch to Image and Style Transform (mentioning)
confidence: 99%
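As a rough illustration of the conditioning described in this excerpt, here is a minimal PyTorch sketch of a conditional discriminator in the pix2pix/Scribbler style, which scores (condition, image) pairs rather than images alone. The channel counts and three-layer depth are illustrative assumptions, not the published architectures of Scribbler [12] or pix2pix [30].

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal sketch of a conditional (pix2pix-style) discriminator.

    The condition (a sketch, optionally with sparse color-stroke
    channels) is concatenated with the real or generated photo along
    the channel axis, so the network judges whether the photo matches
    the sketch, not merely whether it looks real.
    """
    def __init__(self, cond_channels=1, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels + img_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, condition, image):
        return self.net(torch.cat([condition, image], dim=1))
```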
“…By using both an L1 loss and an adversarial loss, the results are more realistic and closer to human visual perception. Other works that generate images from sketches usually produce images at a single, simple scale, such as faces or bedrooms [12,13]. To produce multi-scale Chinese paintings, in contrast, we propose a multi-scale generative model consisting of fully convolutional layers [14]; this model can create artworks that capture not only local detail but also the global structure of the painting.…”
Section: Introduction (mentioning)
confidence: 99%
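The L1-plus-adversarial combination mentioned here is the standard conditional-GAN recipe. The following is a hedged PyTorch sketch of the generator side of that objective; the discriminator interface matches the sketch above, and the weight lambda_l1 = 100 is the pix2pix default, used here only as an illustrative value.

```python
import torch
import torch.nn.functional as nnf

def generator_loss(disc, condition, fake, target, lambda_l1=100.0):
    """Combined adversarial + L1 generator objective (pix2pix-style).

    disc is a conditional discriminator scoring (condition, image)
    pairs; fake is the generator output and target the ground truth.
    """
    logits = disc(condition, fake)
    # Adversarial term: the generator tries to make the discriminator
    # label its output as real (all-ones target on the patch logits).
    adv = nnf.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # L1 term keeps the output pixel-wise close to the ground truth,
    # which reduces artifacts but on its own tends to blur; the
    # adversarial term restores sharpness.
    rec = nnf.l1_loss(fake, target)
    return adv + lambda_l1 * rec
```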
“…Conditional GAN-based image translation [26,45,63] models have shown remarkable success at taking an abstract input, such as an edge map or a semantic segmentation map, and translating it to a real image. Combining this with a user interface allows a user to quickly create images in the target domain.…”
Section: Introduction (mentioning)
confidence: 99%
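To make the interactive use described here concrete, a minimal inference sketch follows, assuming a trained generator with a tanh output head. The function name, tensor layout, and normalization convention are illustrative stand-ins, not details of the cited systems [26,45,63].

```python
import torch

@torch.no_grad()
def translate(generator, abstract_input):
    """Translate an abstract input (edge or segmentation map) to an image.

    generator is any trained image-to-image network; abstract_input is
    a (1, C, H, W) tensor scaled to [-1, 1], as a user interface would
    supply after rasterizing the user's strokes.
    """
    generator.eval()
    fake = generator(abstract_input)  # (1, 3, H, W) synthesized photo
    # Map tanh-range output back to displayable [0, 1] pixel values.
    return (fake.clamp(-1.0, 1.0) + 1.0) / 2.0
```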