2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00930

Space-time Neural Irradiance Fields for Free-Viewpoint Video

Cited by 256 publications (120 citation statements: 0 supporting, 119 mentioning, 1 contrasting)
References 55 publications
“…Unlike Li et al. [2020], we do not rely on optical-flow or depth estimates to constrain the method. In contrast to Xian et al. [2020], our results are sharper in the rigid regions of the scene, less blurred overall, and less prone to halo effects in the transitions from background to foreground. Furthermore, our method does not rely on video depth supervision, and we believe that it can handle larger changes in the scene.…”
Section: Neural Scene Representations and Neural Rendering (contrasting)
confidence: 77%
“…Another line of work learns scene representations for novel view synthesis from 2D images. Although these methods achieve impressive renderings of static [Liu et al. 2020a; Mildenhall et al. 2020; Sitzmann et al. 2019a,b; Zhang et al. 2020] and dynamic scenes, and enable playback and interpolation [Gafni et al. 2020; Li et al. 2020; Lombardi et al. 2019; Park et al. 2020a; Pumarola et al. 2020; Raj et al. 2020; Sida Peng 2020; Tretschk et al. 2020; Xian et al. 2020; Zhang et al. 2020], it is not straightforward to extend them to synthesise full-body human images with explicit control. Moreover, most of them are scene-specific.…”
Section: Classical and Neural Rendering of Humans (mentioning)
confidence: 99%
“…• Neural Scene Flow Fields (Li et al., 2020) take a monocular video with known camera poses as input, but use depth predictions as a prior and regularize by also outputting scene flow, which is used in the loss. • Space-Time Neural Irradiance Fields (Xian et al., 2020) simply use time as an additional input. Carefully selected losses are needed to successfully train this method to render free-viewpoint videos (from RGBD data!).…”
Section: Dynamic (mentioning)
confidence: 99%
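
The last excerpt captures the paper's core design choice compactly: keep the NeRF recipe, but feed the frame time t into the MLP alongside the 3D position. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the class and function names, network width, and positional-encoding frequencies are illustrative assumptions, and the carefully selected losses and RGBD depth supervision the excerpt mentions are omitted.

```python
# Minimal sketch of a space-time radiance field: a NeRF-style MLP that
# takes the frame time t as an extra input next to the 3D position x.
# Hyperparameters below are illustrative guesses, not values from the paper.
import torch
import torch.nn as nn


def positional_encoding(p: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """Lift each coordinate to [p, sin(2^k p), cos(2^k p)] features."""
    feats = [p]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * p))
        feats.append(torch.cos((2.0 ** k) * p))
    return torch.cat(feats, dim=-1)


class SpaceTimeNeRF(nn.Module):
    """NeRF-style MLP conditioned on (x, y, z, t) instead of (x, y, z)."""

    def __init__(self, pos_freqs: int = 10, time_freqs: int = 4, width: int = 256):
        super().__init__()
        self.pos_freqs = pos_freqs
        self.time_freqs = time_freqs
        in_dim = 3 * (1 + 2 * pos_freqs) + 1 * (1 + 2 * time_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),  # outputs (r, g, b, sigma)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor):
        h = torch.cat(
            [positional_encoding(x, self.pos_freqs),
             positional_encoding(t, self.time_freqs)],
            dim=-1,
        )
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])  # color in [0, 1]
        sigma = torch.relu(out[..., 3:])   # non-negative volume density
        return rgb, sigma


# Query the field at sampled ray points for a given video timestamp.
model = SpaceTimeNeRF()
x = torch.rand(1024, 3)   # 3D sample positions along camera rays
t = torch.rand(1024, 1)   # normalized frame times in [0, 1]
rgb, sigma = model(x, t)  # shapes: (1024, 3) and (1024, 1)
```

Volume rendering along camera rays then proceeds exactly as in static NeRF, which is what makes the formulation appealing; the excerpt's caveat is that time-conditioning alone under-constrains the reconstruction, hence the need for additional losses and the depth supervision available from the RGBD input.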