2022
DOI: 10.48550/arxiv.2205.15723
Preprint

DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes

Abstract: Modeling dynamic scenes is important for many applications such as virtual reality and telepresence. Despite achieving unprecedented fidelity for novel view synthesis in dynamic scenes, existing methods based on Neural Radiance Fields (NeRF) suffer from slow convergence (i.e., model training time measured in days). In this paper, we present DeVRF, a novel representation to accelerate learning dynamic radiance fields. The core of DeVRF is to model both the 3D canonical space and 4D deformation field of a dynami…
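The decomposition the abstract names, a time-independent 3D canonical voxel grid plus a 4D deformation field that warps observation-space points back into it, can be illustrated with a minimal sketch. This is not DeVRF's implementation; the class, the toy analytic deformation, and all parameter names are hypothetical stand-ins for what would be learned from multi-view video.

```python
import numpy as np

class DeformableVoxelField:
    """Toy sketch: canonical density grid queried through a deformation field."""

    def __init__(self, resolution=32):
        rng = np.random.default_rng(0)
        # Canonical 3D grid of densities, shared across all time steps.
        self.canonical = rng.random((resolution,) * 3)
        self.res = resolution

    def deformation(self, xyz, t):
        # Toy 4D deformation: a time-dependent translation along x.
        # A real model would learn this field rather than hard-code it.
        return xyz + 0.1 * np.sin(2 * np.pi * t) * np.array([1.0, 0.0, 0.0])

    def query(self, xyz, t):
        # Warp the observation-space point into canonical space, then
        # sample the canonical grid with trilinear interpolation.
        p = np.clip(self.deformation(xyz, t), 0.0, 1.0) * (self.res - 1)
        i0 = np.floor(p).astype(int)
        i1 = np.minimum(i0 + 1, self.res - 1)
        f = p - i0
        val = 0.0
        # Accumulate contributions from the 8 surrounding voxels.
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    idx = (i1[0] if dx else i0[0],
                           i1[1] if dy else i0[1],
                           i1[2] if dz else i0[2])
                    val += w * self.canonical[idx]
        return val

field = DeformableVoxelField()
sigma = field.query(np.array([0.5, 0.5, 0.5]), t=0.25)
```

Because the grid is an explicit array rather than an MLP, a query is a handful of array lookups, which is the source of the speedup over purely implicit dynamic NeRFs.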

Cited by 1 publication (1 citation statement)
References 37 publications
“…Therefore, it is reasonable to extend the use of hybrid representation to combine with the previous dynamic NeRF methods (Pumarola et al. 2021) to accelerate dynamic scene synthesis. However, as discovered in DeVRF (Liu et al. 2022), the combination of hybrid representation and canonical space tends to yield overfitting results, which produce artifacts (e.g., floaters, noisy geometry) on novel views.…”
Section: Introduction
confidence: 99%