2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01781
NeRF-Editing: Geometry Editing of Neural Radiance Fields

Abstract: Implicit neural rendering, especially Neural Radiance Field (NeRF), has shown great potential in novel view synthesis of a scene. However, current NeRF-based methods cannot enable users to perform user-controlled shape deformation in the scene. While existing works have proposed some approaches to modify the radiance field according to the user's constraints, the modification is limited to color editing or object translation and rotation. In this paper, we propose a method that allows users to perform controll…

Cited by 107 publications (69 citation statements). References: 77 publications.
“…For 2D novel view synthesis, we use the Ray-Box Intersection algorithm [10] to calculate near and far bounds for each object, and then rank rendered depths along each ray to achieve occlusion-aware scene-level rendering. This disentangled representation also opens up other types of fine-grained object-level manipulation, such as changing object shape or textures by conditioning on disentangled pre-trained feature fields [16,34], which we consider as an interesting future direction.…”
Section: Compositional Scene Rendering
confidence: 99%
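The ray-box bound computation quoted above is, in standard practice, a slab test of each ray against an object's axis-aligned bounding box. The sketch below illustrates only that step; the function name, NumPy usage, and the example box are illustrative assumptions, not code from the cited works.

# Hedged sketch of a slab-test ray/AABB intersection that yields the
# per-object near/far bounds mentioned in the citation statement above.
# Names and the example geometry are assumptions for illustration only.
import numpy as np

def ray_aabb_near_far(origin, direction, box_min, box_max, eps=1e-9):
    """Return (t_near, t_far) along the ray, or None if the box is missed."""
    # Avoid division by zero for rays parallel to an axis.
    safe_dir = np.where(np.abs(direction) < eps, eps, direction)
    inv_d = 1.0 / safe_dir
    t0 = (box_min - origin) * inv_d          # slab entry/exit candidates per axis
    t1 = (box_max - origin) * inv_d
    t_near = np.max(np.minimum(t0, t1))      # latest entry over the three slabs
    t_far = np.min(np.maximum(t0, t1))       # earliest exit over the three slabs
    if t_far < max(t_near, 0.0):
        return None                          # box behind the ray or not hit
    return float(max(t_near, 0.0)), float(t_far)

# Example: bound the sampling interval for one object before volume rendering.
origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])        # assumed normalized
print(ray_aabb_near_far(origin, direction,
                        np.array([-1.0, -1.0, -1.0]),
                        np.array([1.0, 1.0, 1.0])))   # (2.0, 4.0)

Restricting each object's volume-rendering samples to its (t_near, t_far) interval, and then sorting the per-object rendered depths along the ray, is one way to obtain the occlusion-aware composite the statement describes.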
“…Although the pre-training process is similar to ours, the aimed task, the specific methods used, and the implementation of pretraining are different. In addition to the above extensions, NeRF has been extended for dynamic scenes [37], [38], better rendering effects [39], [40], generalization on multiple scenes [41], [42], [43], [44], [45], faster training or inference speed [46], [47], [48], [49], [50], [51], [52], re-lighting rendering [53], [54], [55], geometry or appearance editing [56], [57], [58], [59], [60] and specifically for processing human bodies [61], [62], [63] and faces [64], [65]. Some works are briefly summarized in [66].…”
Section: Related Work
confidence: 99%
“…Similar to other NeRF-based face modeling approaches like NerFACE and NHA, artifacts may occur in some local regions when extrapolating the expression coefficients to a value that is far from the training distribution. This problem might be circumvented by explicitly modeling the underlying geometry like some NeRF editing approaches [Bao and Yang et al. 2022; Yuan et al. 2022], and we leave this as future work.…”
Section: Limitations
confidence: 99%