2019
DOI: 10.1007/978-3-030-11015-4_20
Deep Normal Estimation for Automatic Shading of Hand-Drawn Characters

Cited by 21 publications (25 citation statements)
References 45 publications
“…Therefore, we would like to implement a coarse-to-fine network structure to handle higher-resolution generation and to generate normal maps for complex sketches. In addition, we think that the idea of multi-scale processing of the input sketch [7] may help achieve higher-quality normal-map generation. In this work, we integrated the U-Net architecture into the discriminator for more accurate results; this approach may be improved to achieve better generation results.…”
Section: Discussion
confidence: 99%
“…Su et al. [SDY*18] propose a Generative Adversarial Network to interactively predict surface normals from sketch input. A similar work is presented in [HGPS18], which estimates normal maps from hand-drawn characters. Li et al. [LPL*18] propose a more generalized solution to model 3D shapes from 2D sketches.…”
Section: Related Work
confidence: 99%
“…Alternatives include attempts to base illustration on machine learning [13], [14], [15], and [16]. Unlike ours, their target is learning lighting effects or shadowing, not colorization.…”
Section: Illustration Colorization
confidence: 99%