2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00026
Pushing the Boundaries of View Extrapolation With Multiplane Images

Abstract: We explore the problem of view synthesis from a narrow baseline pair of images, and focus on generating high-quality view extrapolations with plausible disocclusions. Our method builds upon prior work in predicting a multiplane image (MPI), which represents scene content as a set of RGBα planes within a reference view frustum and renders novel views by projecting this content into the target viewpoints. We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases l…
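The MPI rendering step described in the abstract amounts to warping the RGBα planes into the target view and alpha-compositing them back to front. The following is a minimal sketch of that compositing, assuming the planes have already been reprojected into the target view; the function name and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def over_composite(rgba_planes):
    """Back-to-front "over" compositing of warped MPI planes.

    rgba_planes: array of shape (D, H, W, 4), ordered from the nearest
    plane (index 0) to the farthest plane (index D-1), already
    reprojected into the target view. Returns an (H, W, 3) image.
    """
    out = np.zeros(rgba_planes.shape[1:3] + (3,), dtype=np.float32)
    # Walk from the farthest plane toward the camera, blending each
    # plane over the accumulated result with its alpha channel.
    for plane in rgba_planes[::-1]:
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out
```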

Cited by 271 publications (113 citation statements)
References 43 publications (43 reference statements)
“…A great amount of effort has been dedicated to incorporating geometric information into the model. For example, [18,19,20,21,22,23] apply deep learning techniques to leverage geometry cues and learn to predict the novel view. The deep-learning-based light field camera view interpolation methods [20,24] use a deep network to predict depth separately for every novel view.…”
Section: Related Work
confidence: 99%
“…The deep-learning-based light field camera view interpolation methods [20,24] use a deep network to predict depth separately for every novel view. Another line of work [18,19,21] cleverly extracts a multiplane image representation of the scene. This representation offers regularization that allows for impressive stereo baseline extrapolation.…”
Section: Related Work
confidence: 99%
“…Soft3D introduced a view synthesis pipeline that synthesizes accurate images, even in occluded areas, through a visibility refinement process. The original work has been cited in many follow-on view synthesis investigations [27, 28, 29]. However, few investigations have iteratively updated the PSS matching costs in order to obtain accurate 3D depth reconstruction.…”
Section: Previous Work
confidence: 99%
“…The iterative enhancement scheme of the network is similar to the soft visibility refinement of Soft3D. Srinivasan et al. [29] introduce a view extrapolation method built around an MPI prediction network. This network employs a 3D convolutional architecture to generate an occlusion-free MPI from PSS volumes given as network inputs.…”
Section: Previous Work
confidence: 99%
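The PSS (plane-sweep stereo) volume mentioned above is built by reprojecting the second view onto a set of fronto-parallel candidate planes in the reference frame and stacking the results. Below is a minimal sketch under the simplifying assumption of a rectified stereo pair, where each reprojection reduces to a horizontal shift by that plane's disparity; the function name and the wrap-around shortcut are illustrative assumptions, not the cited papers' code.

```python
import numpy as np

def plane_sweep_volume(ref_img, src_img, disparities):
    """Build a simple plane-sweep stereo (PSS) volume.

    Assumes a rectified stereo pair, so warping the source image onto a
    fronto-parallel plane reduces to a horizontal shift by that plane's
    disparity. ref_img, src_img: (H, W, 3) float arrays; disparities: (D,).
    Returns a (D, H, W, 6) volume pairing the reference image with each
    shifted source image, one slice per candidate plane.
    """
    h, w, _ = ref_img.shape
    volume = np.zeros((len(disparities), h, w, 6), dtype=np.float32)
    for i, d in enumerate(disparities):
        # np.roll wraps pixels around the border; a real implementation
        # would pad or mask instead, but this keeps the sketch short.
        shifted = np.roll(src_img, shift=int(round(d)), axis=1)
        volume[i] = np.concatenate([ref_img, shifted], axis=-1)
    return volume
```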
“…The multi-plane image (MPI) representation is another way to synthesize a 3D image; it renders each pixel to obtain scene-independent new views with consistent non-occlusion when multiple objects are involved in the scene. Each image plane is considered an RGBA image that belongs to a slice of the frustum with its apex at the lens, positioned at fixed, equally spaced depths obtained as the inverse of disparity [22], [23], [24], [25], [26], [27]. Recently, we have utilized MPIs as L fronto-parallel planes for synthesizing a 3D perception image corresponding to L MRIs, and each MPI is positioned in the DoFo zone of image space at non-uniform intervals using the respective inter-depths [14].…”
Section: B. Review on Layered Representation for 3D Perception Image
confidence: 99%
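The quotation above notes that MPI planes sit at depths whose inverses (disparities) are equally spaced. A short sketch of that placement rule follows, with the function name and the near/far parameters chosen here purely for illustration.

```python
import numpy as np

def mpi_plane_depths(near, far, num_planes):
    """Place MPI planes uniformly in disparity (inverse depth).

    near, far: metric depths of the nearest and farthest planes.
    Returns num_planes depths whose reciprocals are equally spaced,
    which concentrates planes close to the camera, where parallax
    between views is largest.
    """
    disparities = np.linspace(1.0 / near, 1.0 / far, num_planes)
    return 1.0 / disparities
```

For example, mpi_plane_depths(1.0, 100.0, 32) places 32 planes between 1 and 100 depth units, spaced uniformly in 1/depth rather than in depth.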