2021
DOI: 10.48550/arxiv.2103.00762
Preprint
NeuTex: Neural Texture Mapping for Volumetric Neural Rendering

Abstract: Recent work [28, 5] has demonstrated that volumetric scene representations combined with differentiable volume rendering can enable photo-realistic rendering for challenging scenes that mesh reconstruction fails on. However, these methods entangle geometry and appearance in a "black-box" volume that cannot be edited. Instead, we present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map. We achieve this by introduc…
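The abstract describes factoring a scene into a 3D geometry volume and a 2D texture map, linked by a learned 3D-to-2D mapping. A minimal sketch of that three-network pipeline, with hypothetical `geometry_net`, `uv_net`, and `texture_net` (tiny random-weight MLPs standing in for trained models, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim, hidden=16):
    """Tiny random-weight MLP stand-in for a trained network (illustrative only)."""
    w1 = rng.normal(size=(in_dim, hidden))
    w2 = rng.normal(size=(hidden, out_dim))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

# Hypothetical networks mirroring the abstract's decomposition:
geometry_net = mlp(3, 1)   # 3D point -> volume density sigma
uv_net       = mlp(3, 2)   # 3D point -> 2D texture coordinate (u, v)
texture_net  = mlp(2, 3)   # (u, v)   -> RGB appearance

x = rng.normal(size=(4, 3))                      # sample points along a camera ray
sigma = np.logaddexp(0.0, geometry_net(x))       # softplus keeps density non-negative
uv = np.tanh(uv_net(x))                          # bounded 2D texture domain
rgb = 0.5 * (1.0 + np.tanh(0.5 * texture_net(uv)))  # stable sigmoid: colors in [0, 1]

print(sigma.shape, uv.shape, rgb.shape)  # (4, 1) (4, 2) (4, 3)
```

Because appearance lives entirely in `texture_net`, editing the 2D texture changes the rendered look without touching geometry, which is the editability the abstract contrasts with "black-box" volumes.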

Cited by 3 publications (5 citation statements) · References 38 publications
“…In particular, NeRF [29] combines MLPs with differentiable volume rendering and achieves photorealistic view synthesis. Following works have tried to advance its performance on view synthesis [27,24]; other relevant works extend it to support other neural rendering tasks like dynamic view synthesis, relighting, and editing [23,31,1,40]. However, most prior works still follow the original NeRF and require an expensive per-scene optimization process.…”
Section: Neural Rendering
confidence: 99%
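The statement above refers to NeRF's differentiable volume rendering. The standard emission-absorption quadrature behind it can be sketched as follows; `composite` is an illustrative name, and the density and color samples are made up:

```python
import numpy as np

def composite(sigma, rgb, deltas):
    """Emission-absorption volume rendering quadrature (NeRF-style)."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # opacity per segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])  # transmittance to each sample
    weights = trans * alpha                                        # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0), weights

sigma = np.array([0.0, 5.0, 50.0])        # density samples along one ray
rgb = np.array([[1.0, 0.0, 0.0],          # per-sample colors
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
deltas = np.full(3, 0.1)                  # segment lengths between samples
color, w = composite(sigma, rgb, deltas)
print(color, w.sum())                     # weights sum to at most 1
```

Every operation here is differentiable in `sigma` and `rgb`, which is what lets gradients from a photometric loss flow back into the scene networks.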
“…In 3D approaches, a geometric model of the scene is estimated, edits are then applied in 3D, and mapped back to the video frames. This approach is most commonly used in a domain where strong 3D priors exist (for example, face filters in Snapchat), or on static scenes (e.g., [Luo et al 2020;Xiang et al 2021]).…”
Section: Introduction
confidence: 99%
“…Such patch-based algorithms typically need particular design to avoid seams between patches and do not consider the texture synthesis from more holistic or global perspective. With the recent advance of deep learning techniques, different methods [Groueix et al 2018;Kanazawa et al 2018;Xiang et al 2021] are proposed to learn the correspondences between 3D shape (e.g. meshes) and the texture space for realizing the texture transfer.…”
Section: Texture Transfer
confidence: 99%
“…meshes) and the texture space for realizing the texture transfer. For instance, Xiang et al [2021] learn a mapping from the implicit 3D representation to the 2D texture map, such that one can change the appearance of a 3D model by swapping the 2D texture map. However, such a texture mapping approach may generate unnatural results if the texture image is mapped across object edges or boundaries.…”
Section: Texture Transfer
confidence: 99%
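The swap described above (replacing the 2D texture map while keeping the learned 3D-to-2D mapping fixed) can be illustrated with a toy lookup; `sample_texture` and the textures here are hypothetical stand-ins, not the cited method:

```python
import numpy as np

def sample_texture(tex, uv):
    """Nearest-neighbor lookup of (u, v) in [0, 1]^2 into an H x W x 3 texture."""
    h, w, _ = tex.shape
    i = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    j = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    return tex[i, j]

uv = np.array([[0.1, 0.2], [0.9, 0.8]])            # fixed UVs from the learned geometry mapping
tex_a = np.zeros((8, 8, 3)); tex_a[..., 0] = 1.0   # red texture
tex_b = np.zeros((8, 8, 3)); tex_b[..., 2] = 1.0   # blue texture

print(sample_texture(tex_a, uv))  # red appearance
print(sample_texture(tex_b, uv))  # blue appearance: texture swapped, geometry untouched
```

The failure mode the quote mentions also falls out of this picture: if the UV mapping sends points on opposite sides of an object edge to nearby texture coordinates, a swapped texture will bleed across that edge and look unnatural.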