2020
DOI: 10.48550/arxiv.2006.10649
Preprint

Multi-Density Sketch-to-Image Translation Network

Abstract: Sketch-to-image (S2I) translation plays an important role in image synthesis and manipulation tasks, such as photo editing and colorization. Some specific S2I translations, including sketch-to-photo and sketch-to-painting, can be used as powerful tools in the art design industry. However, previous methods only support S2I translation with a single level of density, which gives users less flexibility in controlling the input sketches. In this work, we propose the first multi-level density sketch-to-image trans…

Cited by 2 publications (2 citation statements)
References 33 publications
“…Sketch and photo based mutual generation (translation/synthesis) is a classic cross-modal topic of sketch research, covering both: (i) sketch-to-photo generation [17], [273], [274], and (ii) photo-to-sketch generation [55], [271], [272], [275]. In particular, sketch-to-photo generation methods have addressed: (a) sketch to photo [267], (b) sketch & photo to photo [22], [263], and (c) sketch/edge & color to photo [58].…”
Section: Sketch-Photo Generation
Confidence: 99%
“…However, this type of style control is not accurate, and it can be applied to images only in a global way. Later works [47], [33], [15] used an example to semantically control the output images, while other works [56], [60], [16] tried to locally control attributes of the output, such as color and pose, based on users' input. Most recently, motivated by the high quality of StyleGAN [22], some works have begun to incorporate it into the I2I framework.…”
Section: Related Work
Confidence: 99%