GeoNeRF: Generalizing NeRF with Geometry Priors
Preprint, 2021
DOI: 10.48550/arxiv.2111.13539

Figure 1: Our generalizable GeoNeRF model infers complex geometries of objects in a novel scene without per-scene optimization and synthesizes novel images of higher quality than existing works such as IBRNet [53] and MVSNeRF [6].

Cited by 1 publication (2 citation statements). References 25 publications.
“…Contrastingly, MVSNeRF [9] trains on DTU [23] and tests on held-out DTU scenes, the real forward-facing dataset (RFF) [37], and Blender [37]. Various other works [24, 31, 66] have explored different experimental setups. In this work, to evaluate fairly against prior works, we use two experimental settings.…”
Section: Results
Confidence: 99%
“…In contrast with these works, our method 1) does not require deep convolutional features, operating directly on linear projections of local patches, similarly to ViT [14]; 2) does not require volume rendering, producing the final colors directly from a reference set of patches; and 3) is independent of the input frame of reference, leveraging canonicalized ray, point, and camera representations, which improves its generalization ability. Concurrent work on neural rendering generalizable to unseen scenes includes GeoNeRF [24] and NeuRay [31], but both require at least partial depth maps during training.…”
Section: Image-Based Rendering
Confidence: 99%
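The statement above contrasts deep convolutional feature extraction with operating "directly on linear projections of local patches, similarly to ViT [14]". As a minimal sketch of what such a patch embedding looks like (this is an illustration of the general ViT-style technique, not the cited method's actual code; the function name `patch_embed` and all shapes are assumptions for this example):

```python
import numpy as np

def patch_embed(image, patch_size, weight):
    """Split `image` (H, W, C) into non-overlapping patches, flatten
    each patch, and map it through a single linear projection `weight`
    of shape (patch_size * patch_size * C, embed_dim). No convolutional
    feature extractor is involved."""
    h, w, c = image.shape
    p = patch_size
    assert h % p == 0 and w % p == 0, "image must tile evenly into patches"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
    patches = image.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    flat = patches.reshape(-1, p * p * c)
    return flat @ weight  # (num_patches, embed_dim)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8, 3))        # toy 8x8 RGB image
W = rng.standard_normal((4 * 4 * 3, 16))    # one linear layer, embed_dim = 16
tokens = patch_embed(img, 4, W)
print(tokens.shape)  # (4, 16): four 4x4 patches, each projected to 16 dims
```

Each patch token is produced by one matrix multiply, which is what allows such a method to skip a deep CNN backbone entirely.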