2020
DOI: 10.48550/arxiv.2004.01294
Preprint

Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera

Abstract: Figure 1: Dynamic Scene View Synthesis: (Left) A dynamic scene is captured by a monocular camera from locations V_0 to V_k. Each image captures people jumping at each time step (t = 0 to t = k). (Middle) A novel view from an arbitrary location between V_0 and V_1 (denoted as an orange frame) is synthesized with the dynamic content observed at time t = k. The estimated depth at V_k is shown in the inset. (Right) For the novel view (orange frame), we can also synthesize the dynamic content that appea…
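The abstract outlines the core setup: each frame of a monocular video has an estimated depth map, and a novel view is rendered by reprojecting observed content through that depth into an arbitrary camera. As a rough, self-contained illustration of depth-based view warping in general (not the paper's actual pipeline, which additionally enforces globally coherent depths across frames), the sketch below forward-warps a source image into a target camera. The function name, the shared-intrinsics assumption, and the nearest-pixel z-buffered splatting are illustrative choices, not taken from the paper.

import numpy as np

def warp_to_novel_view(image, depth, K, R, t):
    # Forward-warp a source image into a novel view using its depth map.
    #   image: (H, W, 3) source RGB image
    #   depth: (H, W) per-pixel depth in the source camera
    #   K:     (3, 3) intrinsics (assumed shared by both views)
    #   R, t:  rotation (3, 3) and translation (3,) from source to target camera
    H, W = depth.shape

    # Pixel grid in homogeneous coordinates, flattened row-major: (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project to 3D in the source frame, then move to the target frame.
    rays = np.linalg.inv(K) @ pix
    pts_src = rays * depth.reshape(1, -1)          # scale unit-plane rays by depth
    pts_tgt = R @ pts_src + t.reshape(3, 1)

    # Project into the target image plane; guard against points behind the camera.
    proj = K @ pts_tgt
    z = proj[2]
    valid = z > 1e-6
    z_safe = np.where(valid, z, 1.0)
    uu = np.round(proj[0] / z_safe).astype(int)
    vv = np.round(proj[1] / z_safe).astype(int)
    valid &= (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)

    # Z-buffered splat: nearer points overwrite farther ones.
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    colors = image.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[vv[i], uu[i]]:
            zbuf[vv[i], uu[i]] = z[i]
            out[vv[i], uu[i]] = colors[i]
    return out

A single warp like this leaves holes at disocclusions, and for moving content it only stays consistent across time if the per-frame depth estimates agree with one another, which is precisely the globally coherent depth constraint the paper targets.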

Cited by 1 publication (2 citation statements)
References 45 publications (54 reference statements)
“…It is also addressed as "multiview synthesis" in literature. For this task, some approaches have synthesized lifelike images [21,19,28,12,43,40]. However, they heavily rely on pose supervision or 3D annotation.…”
Section: Related Work
confidence: 99%
“…To have a better understanding of the improvements of the feedback connections, we also compare our approach with two effective few-shot recognition models: Matching Net (MN) [36] and Proto-Matching Net (PMN) [37]. Note that while conducting comparisons with other view synthesis models (such as [42,34,19,28,12,43,40]) and few-shot recognition models (such as [2,16,6]) are interesting, this is not the main focus of our paper. We aim to validate that the bowtie architecture outperforms the single-task models upon which it builds.…”
Section: Experimental Evaluation
confidence: 99%