2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00540

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

Abstract: Under good conditions, Neural Radiance Fields (NeRFs) have shown impressive results on novel view synthesis tasks. NeRFs learn a scene's color and density fields by minimizing the photometric discrepancy between training views and differentiable renders of the scene. Once trained from a sufficient set of views, NeRFs can generate novel views from arbitrary camera positions. However, the scene geometry and color fields are severely under-constrained, which can lead to artifacts, especially when trained with few…
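As a rough illustration of the training objective the abstract describes, here is a minimal sketch (not the authors' code) of volume rendering and the photometric loss. The radiance-field function `field_fn`, the per-sample spacings `deltas`, and the batch layout are illustrative assumptions.

```python
# Minimal sketch of NeRF-style volume rendering and the photometric loss.
# Assumes `field_fn(points, dirs)` returns (rgb, sigma) for S samples on a ray.
import jax
import jax.numpy as jnp

def render_ray(field_fn, points, dirs, deltas):
    """Composite sample colors along one ray with volume rendering weights."""
    rgb, sigma = field_fn(points, dirs)             # (S, 3), (S,)
    alpha = 1.0 - jnp.exp(-sigma * deltas)          # per-segment opacity
    trans = jnp.cumprod(1.0 - alpha + 1e-10)        # cumulative transmittance
    trans = jnp.concatenate([jnp.ones(1), trans[:-1]])  # exclusive product
    weights = alpha * trans
    return jnp.sum(weights[:, None] * rgb, axis=0)  # rendered pixel color

def photometric_loss(field_fn, batch):
    """Mean squared error between rendered and observed pixel colors."""
    render = jax.vmap(lambda p, d, dt: render_ray(field_fn, p, d, dt))
    pred = render(batch["points"], batch["dirs"], batch["deltas"])
    return jnp.mean((pred - batch["rgb"]) ** 2)
```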

Cited by 191 publications (94 citation statements)
References 57 publications
“…Neural Radiance Fields (NeRF) [24] propose an effective implicit neural scene representation method to synthesize novel views by the single scene optimization. Many subsequent coordinate-based methods are inspired by it, and we can classify them into neural surface fields [27,42,37] and neural radiance fields [26,16,44] based on the shape representation differences. However, both ways have poor performance on large scenes due to the limited representation capability of MLP-based networks adopted in NeRF.…”
Section: Neural Scene Representation and Rendering
confidence: 99%
“…However, it mainly considers the enhancement of the density aspect. RegNeRF [73] proposes to add constraints on both density and color to reduce the number of input images required, where the color constraint is patch-based supervision. Our method also introduces depth and patch-based supervision, but we propose a pre-training process using mesh rendering as pseudo ground truth and introduce the 3D voxel color prior, which provides more prior knowledge.…”
Section: Related Work
confidence: 99%
“…To mitigate this problem, we take advantage of the geometry regularization used in [36] and encourage the generated depth to be smooth, even from unobserved viewpoints. The regularization is based on the real-world observation that real geometry or depth tends to be smooth, and is more likely to be flat, and formulated such that depth for each pixel should not be too different from those of neighboring pixels.…”
Section: Loss Functions
confidence: 99%
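The depth-smoothness term this excerpt refers to can be sketched as follows, in the spirit of the patch-based regularizer of [36] (RegNeRF): neighboring pixels of an expected-depth patch rendered from an unobserved viewpoint are penalized for differing. The name `depth_patch` and the patch layout are assumptions for illustration, not the citing paper's code.

```python
# Hedged sketch: patch-based depth smoothness on a rendered (P, P) depth patch.
import jax.numpy as jnp

def depth_smoothness_loss(depth_patch):
    """Squared difference of each depth value to its right and lower neighbor."""
    d = depth_patch
    dx = (d[:, :-1] - d[:, 1:]) ** 2   # horizontal neighbor differences
    dy = (d[:-1, :] - d[1:, :]) ** 2   # vertical neighbor differences
    return jnp.sum(dx) + jnp.sum(dy)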
“…where H and W are the height and width of the generated depth map, and r_{i,j} indicates the ray through pixel (i, j). Note that while [36] implements the geometry regularization by comparing overlapping patches, we utilize the full generated depth map D for our implementation.…”
Section: Loss Functions
confidence: 99%
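Since the equation the excerpt refers to is not reproduced above, the following is a plausible full-depth-map variant only, under the assumption that the same neighbor-difference penalty is averaged over all H × W pixels of the rendered depth map D, where D[i, j] is the expected depth along ray r_{i,j}; the exact formula of the citing paper may differ.

```python
# Hedged sketch: depth smoothness averaged over the full (H, W) depth map.
import jax.numpy as jnp

def full_map_depth_smoothness(D):
    """Mean squared depth difference between adjacent pixels of the full map."""
    H, W = D.shape
    dx = (D[:, :-1] - D[:, 1:]) ** 2
    dy = (D[:-1, :] - D[1:, :]) ** 2
    return (jnp.sum(dx) + jnp.sum(dy)) / (H * W)
```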