2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.01352
Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering

Cited by 165 publications (98 citation statements) · References 17 publications
“…Our solution to the first problem is to add a parsing branch. Since the portrait parts of the same semantic category share similar motion patterns and texture information, it will be beneficial for the appearance and geometry learning in NeRF, which is also proven in recent implicit representation studies [73][74][75]81]. As shown in Fig.…”
Section: Semantic-aware Dynamic Ray Sampling
confidence: 89%
“…Our model also incorporates a learned camera pose refinement, which has been explored in previous works [34,59,66,69,70]. Some NeRF-based methods use segmentation data to isolate and reconstruct static [67] or moving objects (such as [3]). The first MLP fσ predicts the density σ for a position x in space.…”
Section: Novel View Synthesis
confidence: 99%
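The quoted passage describes NeRF's first MLP, fσ, which maps a 3-D position x to a volume density σ. A minimal sketch of that idea in NumPy follows; the positional-encoding frequency count and layer sizes are illustrative assumptions, not the cited paper's actual configuration:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Lift a 3-D point to sin/cos features at increasing frequencies,
    as NeRF does before feeding positions to the MLP."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin(2.0 ** i * np.pi * x))
        feats.append(np.cos(2.0 ** i * np.pi * x))
    return np.concatenate(feats)

class DensityMLP:
    """Toy stand-in for f_sigma: position -> non-negative density."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def __call__(self, x):
        h = np.maximum(0.0, positional_encoding(x) @ self.w1)  # ReLU hidden layer
        return float(np.logaddexp(0.0, h @ self.w2)[0])        # softplus keeps sigma >= 0

# Query the toy density field at one point.
# in_dim = 3 raw coords + 3 coords * 2 (sin, cos) * 4 frequencies = 27
x = np.array([0.1, -0.2, 0.5])
sigma = DensityMLP(in_dim=27)(x)
```

In a full NeRF, a second branch would additionally predict view-dependent color; here only the density head from the quote is sketched.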
“…Further, the inability to render scenes containing dynamic objects currently limits the applicability of Block-NeRF towards closed-loop simulation tasks in robotics. In the future, these issues could be addressed by learning transient objects during the optimization [40], or directly modeling dynamic objects [44,67]. In particular, the scene could be composed of multiple Block-NeRFs of the environment and individual controllable object NeRFs.…”
Section: Limitations and Future Work
confidence: 99%
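The passage above envisions composing a scene from multiple Block-NeRFs plus individual controllable object NeRFs. One common convention for such composition (a hedged sketch, not Block-NeRF's actual implementation) is to sum the component densities at each ray sample, blend colors by density weight, and then alpha-composite along the ray:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite one ray through K component NeRFs.

    densities: (K, S) density of each component at S ray samples
    colors:    (K, S, 3) RGB of each component at each sample
    deltas:    (S,) distances between adjacent samples
    Returns the rendered (3,) RGB for the ray.
    """
    # Combine components: total density is the sum; color is the
    # density-weighted mean (a common compositional-NeRF convention).
    sigma = densities.sum(axis=0)                               # (S,)
    w = densities / np.clip(sigma, 1e-8, None)                  # (K, S)
    rgb = (w[..., None] * colors).sum(axis=0)                   # (S, 3)

    # Standard volume-rendering quadrature.
    alpha = 1.0 - np.exp(-sigma * deltas)                       # (S,)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                                     # (S,)
    return (weights[:, None] * rgb).sum(axis=0)                 # (3,)

# Example: one static "scene" component and one movable "object" component.
rng = np.random.default_rng(1)
densities = rng.uniform(0.0, 2.0, (2, 5))
colors = rng.uniform(0.0, 1.0, (2, 5, 3))
pixel = composite_ray(densities, colors, np.full(5, 0.1))
```

Because each object contributes its own density field, editing the scene (moving or removing an object) only changes that component's samples before compositing.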
“…Recently, several works [3,4,6,20,43,49] have extended NeRF and achieved impressive progress in decomposing the renderings of a scene into semantically meaningful components, including geometry, reflectance, material, and lighting, enabling a flexible interaction with any of these components, e.g. relighting and swapping the background.…”
Section: Introduction
confidence: 99%