Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413734
Neural3D: Light-weight Neural Portrait Scanning via Context-aware Correspondence Learning

Abstract: Figure 1: Illustration of our Neural3D system, which achieves convenient and realistic neural reconstruction and free-viewpoint rendering of human portraits from only a single portable RGB camera.

Cited by 6 publications (3 citation statements)
References 47 publications
“…In the area of photo-realistic novel view synthesis and 3D scene reconstruction, neural rendering shows great power and huge potential. Various data representations are adopted to obtain better performance and characteristics, such as point-clouds [1,48,54], voxels [28], texture meshes [26,51] or implicit functions [31,34] and hybrid neural blending [46,47]. NHR [54] embeds spatial features into sparse dynamic point-clouds; Neural Volumes [28] transforms input images into a 3D volume representation by a VAE network.…”
Section: Related Work
confidence: 99%
“…In the photo-realistic novel view synthesis and 3D scene modeling domain, differentiable neural rendering based on various data proxies achieves impressive results and becomes more and more popular. Various data representations are adopted to obtain better performance and characteristics, such as point-clouds [Aliev et al 2020; Suo et al 2020], voxels [Lombardi et al 2019], texture meshes [Shysheya et al 2019; Thies et al 2019] or implicit functions [Kellnhofer et al 2021; Mildenhall et al 2020; Park et al 2019] and hybrid neural blending [Jiang et al 2022a; Sun et al 2021; Suo et al 2021]. More recently, [Li et al 2020; Park et al 2020; Pumarola et al 2021] extend neural radiance field [Mildenhall et al 2020] into the dynamic setting.…”
Section: Related Work
confidence: 99%
“…The recent progress of differentiable neural rendering brings huge potential for 3D scene modeling and photorealistic novel view synthesis. Researchers explore various data representations to pursue better performance and characteristics, such as point-clouds [2,58,64], voxels [31], texture meshes [27,60] or implicit functions [7,33,34,36,43,63]. However, these methods…”
Section: Related Work
confidence: 99%