2020
DOI: 10.1109/tvcg.2020.3030330

SAniHead: Sketching Animal-like 3D Character Heads Using a View-surface Collaborative Mesh Generative Network

Cited by 8 publications (10 citation statements)
References 56 publications
“…Recently, researchers have also explored various schemes for interactive modeling from 2D sketch images [10], [16], [18], [19], [20], [38], [39]. In line with our work, DeepSketch2Face [10] proposed a sketch modeling system that allows users to create caricature heads from scratch.…”
Section: 3D Face From 2D Image (supporting)
confidence: 64%
“…However, since the 3D caricature shape is confined to the parametric caricature face model, DeepSketch2Face cannot faithfully reflect the large deformations and wrinkle details presented in the sketch. To address this issue, SAniHead [16] proposed a view-surface collaborative mesh generative network, which turns dual-view freehand sketches into animalmorphic heads. Nevertheless, it fails to synthesize novel shapes deviating from the training dataset due to the restricted generalization ability of its network.…”
Section: 3D Face From 2D Image (mentioning)
confidence: 99%
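To make the dual-view idea in the statement above concrete, here is a minimal PyTorch sketch of a network that encodes front- and side-view sketches separately and predicts per-vertex offsets for a fixed template head mesh. The module names, layer sizes, and template resolution are illustrative assumptions, not the actual SAniHead architecture.

```python
import torch
import torch.nn as nn

class DualViewSketchToMesh(nn.Module):
    """Hypothetical skeleton: two view-specific CNN encoders whose fused
    features predict per-vertex offsets for a fixed template head mesh.
    An illustrative guess at the general dual-view idea, not SAniHead."""

    def __init__(self, num_vertices=2562, feat_dim=256):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.front_enc = encoder()   # front-view sketch branch
        self.side_enc = encoder()    # side-view sketch branch
        self.decoder = nn.Linear(2 * feat_dim, num_vertices * 3)

    def forward(self, front_sketch, side_sketch, template_vertices):
        # template_vertices: (V, 3) base head mesh to be deformed
        f = torch.cat([self.front_enc(front_sketch),
                       self.side_enc(side_sketch)], dim=1)
        offsets = self.decoder(f).view(-1, template_vertices.shape[0], 3)
        return template_vertices.unsqueeze(0) + offsets  # deformed vertices

# Toy usage with random inputs.
model = DualViewSketchToMesh()
front = torch.rand(1, 1, 128, 128)      # front-view sketch image
side = torch.rand(1, 1, 128, 128)       # side-view sketch image
template = torch.rand(2562, 3)          # placeholder template vertices
verts = model(front, side, template)    # (1, 2562, 3) deformed mesh vertices
```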
“…In other words, such methods treat the inputs as hard constraints and generate shapes strictly corresponding to the inputs. Another group of methods adopts indirect reconstruction strategies: they generate shapes by deforming or refining intermediate shape proxies [Du et al. 2020; Guillard et al. 2021; Han et al. 2017; Zhang et al. 2021] to better approximate the geometric features contained in the input drawings. As mentioned earlier, the algorithm-generated line drawings strictly resemble the geometric features of the source images or models.…”
Section: Granularity (mentioning)
confidence: 99%
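The proxy-deformation strategy described in the statement above can be illustrated with a small, self-contained example: a coarse point-cloud proxy is deformed by optimizing per-point offsets against a target point set under a Chamfer distance. The synthetic data, loss weights, and optimizer settings are assumptions for illustration only; none of the cited methods is implemented here.

```python
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)                       # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Proxy: a coarse unit-sphere point cloud standing in for an intermediate shape.
proxy = torch.randn(500, 3)
proxy = proxy / proxy.norm(dim=1, keepdim=True)

# Target: points representing the geometry the input drawing is meant to convey
# (here simply a larger sphere, for illustration).
target = torch.randn(500, 3)
target = 1.3 * target / target.norm(dim=1, keepdim=True)

offsets = torch.zeros_like(proxy, requires_grad=True)
opt = torch.optim.Adam([offsets], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    deformed = proxy + offsets                  # refine the proxy, not the raw output
    loss = chamfer(deformed, target) + 1e-3 * offsets.pow(2).mean()  # mild regularizer
    loss.backward()
    opt.step()
```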
“…Shape modeling from multi-view sketches involves generating a 3D shape using sketches drawn from different viewpoints. Existing methods for shape modeling from multi-view sketches often utilize multi-branch inputs with fixed viewpoints [LGK*17, ZQG*20, DHF*20] or employ an iterative refinement strategy [DAI*18, CWC*22] to improve modeling quality. However, these methods still suffer from limitations such as the inability to retain faithful details depicted by input sketches [DAI*18, CWC*22] and the inefficiency of multi-view interaction [LGK*17, LPL*18].…”
Section: Introduction (mentioning)
confidence: 99%
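As an illustration of the iterative-refinement strategy mentioned in the statement above, the hypothetical sketch below repeatedly updates a latent shape code from the current code and pooled multi-view sketch features. The network shape, feature dimensions, and iteration count are assumptions, not any cited method's actual design.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Hypothetical illustration of iterative refinement: a small network
    repeatedly updates a latent shape code from the current code plus
    pooled features of the user's multi-view sketches."""

    def __init__(self, code_dim=128, sketch_feat_dim=128):
        super().__init__()
        self.step = nn.Sequential(
            nn.Linear(code_dim + sketch_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )

    def forward(self, code, sketch_feat, iters=3):
        for _ in range(iters):
            delta = self.step(torch.cat([code, sketch_feat], dim=-1))
            code = code + delta             # residual update of the shape code
        return code

# Toy usage with random inputs.
refiner = IterativeRefiner()
code = torch.zeros(1, 128)                  # initial shape code
sketch_feat = torch.rand(1, 128)            # pooled multi-view sketch features
refined = refiner(code, sketch_feat)        # (1, 128) refined shape code
```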