2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00704
NeuTex: Neural Texture Mapping for Volumetric Neural Rendering

Cited by 62 publications (31 citation statements) · References 34 publications
“…The key difference between implicit neural networks and conventional fully connected networks is that the former can learn high-frequency functions more effectively and, thus, can encode natural signals with higher fidelity. Owing to this unique ability, implicit neural networks have penetrated many tasks in computer vision such as texture generation [Henzler et al., 2020, Oechsle et al., 2019, Xiang et al., 2021], shape representation [Chen and Zhang, 2019, Deng et al., 2020, Tiwari et al., 2021, Genova et al., 2020, Basher et al., 2021, Mu et al., 2021, Park et al., 2019], and novel view synthesis [Niemeyer et al., 2020, Saito et al., 2019, Sitzmann et al., 2019, Yu et al., 2021, Pumarola et al., 2021, Rebain et al., 2021, Park et al., 2021].…”
Section: Related Work
confidence: 99%
“…NeRF [27] proposes to use a 5D function to represent the scene and applies volumetric rendering for novel view synthesis, achieving photo-realistic results and detailed geometry reconstruction. This powerful representation has quickly received attention and has been extensively studied and applied in various fields [53, 25], such as generative settings [6, 38, 30, 5], dynamic scenes [21, 32], and texture mapping [47]. In particular, we categorize recent progress by the design of the underlying functions into three classes: implicit, hybrid, and explicit.…”
Section: Scene Representation with NeRF
confidence: 99%
“…NeRF [28] uses a global MLP to regress the volume density and view-dependent radiance at any arbitrary point in space, and applies volume rendering to synthesize images at novel viewpoints. Follow-up works extend the framework to different tasks such as relighting [3][4][5][39], scene editing [46], and dynamic scene modeling [23, 31, 32]. Similar to NeRF, most of these works train scene-specific MLP networks from scratch, which can take hours or even days to optimize.…”
Section: Related Work
confidence: 99%
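The volume-rendering step this last excerpt refers to composites the MLP's per-sample densities and radiance along each camera ray. A minimal NumPy sketch of that quadrature (sample generation and the MLP itself are omitted; the function name is illustrative):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Alpha-composite samples along one ray, NeRF-style.

    sigmas: (N,) volume density at each sample along the ray
    colors: (N, 3) RGB radiance at each sample
    deltas: (N,) distance between adjacent samples
    Returns the rendered (3,) RGB value for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each segment
    # Transmittance: fraction of light surviving to each sample
    # (shifted so the first sample sees full transmittance).
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])
    weights = trans * alphas                       # compositing weights
    return (weights[:, None] * colors).sum(axis=0)
```

A nearly opaque first sample dominates the composite, while zero-density samples contribute nothing; this is the differentiable operation through which gradients flow back into the density and radiance MLP during training.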