2020
DOI: 10.48550/arxiv.2011.12950
Preprint
Space-time Neural Irradiance Fields for Free-Viewpoint Video

Cited by 20 publications (26 citation statements)
References 79 publications
“…However, these neural representations can only handle static scenes, and the literature on dynamic scene neural representation remains sparse. Recent work [Ost et al 2020; Pumarola et al 2020; Rebain et al 2020; Tretschk et al 2020; Xian et al 2020] extends NeRF [Mildenhall et al 2020a] to the dynamic setting. They decompose the task into learning a spatial mapping from the canonical scene to the current scene at each time step and regressing the canonical radiance field.…”
Section: Related Work
confidence: 99%
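The decomposition described above (a per-time-step deformation into a canonical frame, followed by a single canonical radiance field) can be sketched as below. This is a minimal toy illustration, not any cited paper's implementation: the weight matrices stand in for trained MLPs, and all names (`deform_to_canonical`, `canonical_field`, `query`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights standing in for trained deformation / canonical MLPs.
W_deform = rng.standard_normal((4, 3)) * 0.1  # maps (x, y, z, t) -> offset
W_canon = rng.standard_normal((3, 4)) * 0.1   # maps canonical xyz -> (r, g, b, sigma)

def deform_to_canonical(x, t):
    """Deformation network: warps a point observed at time t into the canonical frame."""
    inp = np.append(x, t)              # concatenate position and time: (x, y, z, t)
    offset = np.tanh(inp @ W_deform)   # predicted displacement
    return x + offset                  # canonical-frame position

def canonical_field(x_canon):
    """Canonical radiance field: returns color (r, g, b) and density sigma."""
    out = x_canon @ W_canon
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # sigmoid keeps color in (0, 1)
    sigma = np.log1p(np.exp(out[3]))      # softplus keeps density non-negative
    return rgb, sigma

def query(x, t):
    """A dynamic-scene query is deformation followed by the canonical field."""
    return canonical_field(deform_to_canonical(np.asarray(x, dtype=float), t))

rgb, sigma = query([0.1, 0.2, 0.3], t=0.5)
```

The key design point the quoted papers share is that time only enters through the deformation: the radiance field itself is time-independent, which lets every frame supervise the same canonical scene.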
“…For neural modeling and rendering of dynamic scenes, NHR [64] embeds spatial features into sparse dynamic point clouds, while Neural Volumes [30] transforms input images into a 3D volume representation with a VAE network. More recently, [26,44,47,48,61,65,74] extend neural radiance fields (NeRF) [36] to the dynamic setting. They learn a spatial mapping from the canonical scene to the current scene at each time step and regress the canonical radiance field.…”
Section: Blended Image
confidence: 99%
“…To regularize the training, Neural Body [33] combines NeRF with a deformable human body model (e.g., SMPL [26]). Despite the promising results, these general NeRF [19,53] and human-specific NeRF [13,32,33,35,50] methods must be optimized for each new video separately, and generalize poorly to unseen scenarios. Generalizable NeRFs [36,47,52] try to avoid the expensive per-scene optimization through image-conditioning with pixel-aligned features.…”
Section: Related Work
confidence: 99%
“…Recently, neural radiance fields (NeRF) [28,13,19,32,33,35,36,47,50,52,53] have shown photo-realistic novel view synthesis results in per-scene optimization settings. To avoid the expensive per-scene training and improve the practicality, generalizable NeRFs [36,52,47] have been proposed which use image-conditioned, pixel-aligned features and achieve feed-forward view synthesis from sparse input views [36,52].…”
Section: Introduction
confidence: 99%
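The pixel-aligned feature conditioning mentioned in the statements above can be sketched as follows: a 3D sample point is projected into an input view with the camera intrinsics, and a feature is looked up at that pixel to condition the radiance prediction. This is a toy sketch under stated assumptions (a pinhole camera, a stand-in feature map, nearest-neighbour lookup where the papers use bilinear interpolation); the function names are hypothetical.

```python
import numpy as np

def project(point, K):
    """Pinhole projection of a 3D point (in camera coordinates) to pixel coords."""
    uvw = K @ point
    return uvw[:2] / uvw[2]  # perspective divide

def pixel_aligned_feature(feat_map, uv):
    """Nearest-neighbour lookup in an (H, W, C) feature map.

    Generalizable NeRFs use bilinear sampling here; nearest-neighbour keeps
    the sketch short.
    """
    h, w, _ = feat_map.shape
    u = int(np.clip(round(uv[0]), 0, w - 1))
    v = int(np.clip(round(uv[1]), 0, h - 1))
    return feat_map[v, u]

# Toy intrinsics and a stand-in for a CNN feature map of an input view.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
feat_map = np.zeros((64, 64, 8))
feat_map[:, :, 0] = 1.0

uv = project(np.array([0.1, -0.05, 2.0]), K)  # project a 3D sample point
f = pixel_aligned_feature(feat_map, uv)       # feature that conditions the MLP
```

Because the conditioning feature comes from the input image rather than per-scene weights, the same network can synthesize views of a new scene feed-forward, without per-scene optimization.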