2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/ICCV.2019.00419
Monocular Neural Image Based Rendering With Continuous View Control

Abstract: Interactive novel view synthesis: given a single source view, our approach can generate a continuous sequence of geometrically accurate novel views under fine-grained control. Top: given a single street-view-like input, a user may specify a continuous camera trajectory and our system generates the corresponding views in real time. Bottom: an unseen high-resolution internet image is used to synthesize novel views while the camera is controlled interactively. Please refer to our project homepage†.

Cited by 23 publications (3 citation statements)
References 49 publications
“…When synthesizing with multiple input views, 3D structural representations are often leveraged, such as classical multi-view geometry [10,11,18,26,53,76], deep voxel representations [31,54], and neural radiance fields [38,66]. Recently, researchers have also proposed to perform single-image view synthesis to bring a static photo to life [20,25,28,45,55,57,70,72]. For example, Wiles et al. [70] propose to perform view synthesis using 3D point clouds as intermediate representations.…”
Section: Related Work
confidence: 99%
“…Generative view synthesis. Novel view synthesis aims to produce new views of a scene from single [8,32,38,59,68,75,76,83,84,92,95] or multiple image observations [2,11,17,40,49,52-54,66,70,73,89,100] by constructing a local or global 3D scene representation. However, most prior methods can only interpolate or extrapolate a limited distance from the input views, and do not possess a generative ability.…”
Section: Related Work
confidence: 99%
“…3D-Aware Image Synthesis: Learning-based novel view synthesis has been intensively investigated in the literature [12,33,51,56-59,64,70]. These methods generate unseen views from the same object and typically require camera viewpoints as supervision.…”
Section: Related Work
confidence: 99%