2018 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)
DOI: 10.1109/3dtv.2018.8478484
Depth Image Based View Synthesis With Multiple Reference Views for Virtual Reality

Abstract: This paper presents a method for view synthesis from multiple views and their depth maps for free navigation in Virtual Reality with six degrees of freedom (6DoF) and 360 video (3DoF+), including synthesizing views that correspond to stepping into or out of the scene. Such scenarios should support large-baseline view synthesis, typically going beyond the view synthesis involved in light field displays [1]. Our method accepts an unlimited number of reference views as input, instead of the usual left and right reference views…
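As a rough illustration of the depth-based warping that this kind of view synthesis builds on, the sketch below forward-warps each reference view into the virtual camera and blends the results. It is a minimal NumPy sketch of generic DIBR under pinhole-camera assumptions, not the authors' implementation; all names (warp_to_virtual, synthesize, and the camera parameters K, R, t) are hypothetical.

```python
# Minimal sketch of depth-image-based view synthesis from several reference views,
# assuming pinhole cameras with known intrinsics K and poses [R | t] (x = K(RX + t)).
# Illustrative only: not the paper's RVS/SVS implementation.
import numpy as np

def warp_to_virtual(color, depth, K_ref, R_ref, t_ref, K_virt, R_virt, t_virt, out_shape):
    """Forward-warp one reference view into the virtual camera with a z-buffer."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])       # homogeneous pixels
    # Unproject to world space: X = R^T (Z * K^-1 x - t)
    rays = np.linalg.inv(K_ref) @ pix * depth.ravel()
    X = R_ref.T @ (rays - t_ref.reshape(3, 1))
    # Reproject into the virtual camera
    x = K_virt @ (R_virt @ X + t_virt.reshape(3, 1))
    z = x[2]
    valid = z > 1e-6
    uv = np.round(x[:2, valid] / z[valid]).astype(int)
    oh, ow = out_shape
    inside = (uv[0] >= 0) & (uv[0] < ow) & (uv[1] >= 0) & (uv[1] < oh)
    uv, z_in = uv[:, inside], z[valid][inside]
    cols = color.reshape(-1, 3)[valid.nonzero()[0][inside]]
    # Z-buffered splatting: the nearest surface wins at each output pixel
    out = np.zeros((oh, ow, 3), dtype=float)
    zbuf = np.full((oh, ow), np.inf)
    for (uu, vv), zz, c in zip(uv.T, z_in, cols):
        if zz < zbuf[vv, uu]:
            zbuf[vv, uu] = zz
            out[vv, uu] = c
    return out, zbuf

def synthesize(references, K_virt, R_virt, t_virt, out_shape):
    """Blend an arbitrary number of warped reference views (simple inverse-depth weights)."""
    acc = np.zeros((*out_shape, 3))
    wsum = np.zeros(out_shape)
    for color, depth, K, R, t in references:
        img, zbuf = warp_to_virtual(color, depth, K, R, t, K_virt, R_virt, t_virt, out_shape)
        wgt = np.where(np.isfinite(zbuf), 1.0 / np.maximum(zbuf, 1e-6), 0.0)
        acc += img * wgt[..., None]
        wsum += wgt
    return acc / np.maximum(wsum[..., None], 1e-12)
```

The inverse-depth blending weight is only one simple choice; the paper's actual blending and hole-filling strategy is not reproduced here.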

Cited by 40 publications (35 citation statements)
References 5 publications (6 reference statements)
“…IBR techniques overcome this problem by warping the image pixels to the new images in screen space. A specific IBR technique, Depth Image-Based Rendering (DIBR) [10,8,33,4], warps the input images according to the motion parallax, which is inversely proportional to the depth of the scene. The Immersive sub-community of the Moving Picture Experts Group (MPEG-I, developing standards for immersive video compression) divides DIBR into two steps after camera calibration: 1) depth estimation and 2) rendering.…”
Section: Related Work
confidence: 99%
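The inverse relation between parallax and depth quoted above is the standard stereo-geometry identity; for rectified, parallel cameras (an assumption made here purely for illustration, not a claim about this paper's camera setup) the pixel shift is:

```latex
% Disparity (horizontal pixel shift) for rectified, parallel cameras:
%   f = focal length in pixels, B = baseline between the cameras, Z = scene depth.
d = \frac{f\,B}{Z}, \qquad \text{so } d \propto \frac{1}{Z}.
```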
“…The working principles of RVS are outlined in [5] (for its predecessor, called SVS) and are very similar to those of [1]. The main differences are that (i) a disparity is given per pixel (a dense disparity map), (ii) the input camera views do not have to be parallel (the pixel shift is then replaced by a reprojection of an input view to the virtual view), and (iii) any triplet of adjacent pixels is implicitly connected through triangles in the OpenGL rendering process (which is the cause of the in Fig.…”
Section: View Synthesis
confidence: 99%
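Point (iii) of the quote, connecting every triplet of adjacent pixels through triangles, amounts to rasterizing the warped view as a regular grid mesh. The snippet below is a hedged sketch of how such an index buffer could be built; it is not the RVS/OpenGL code itself, and grid_triangles is a hypothetical helper.

```python
# Loose sketch: two triangles per pixel quad, so adjacent pixel triplets become
# mesh faces that a rasterizer (e.g. OpenGL) can interpolate across.
import numpy as np

def grid_triangles(h, w):
    """Triangle index buffer for an h x w pixel grid; vertex (r, c) has index r * w + c."""
    idx = np.arange(h * w).reshape(h, w)
    tl, tr = idx[:-1, :-1], idx[:-1, 1:]     # top-left, top-right corners of each quad
    bl, br = idx[1:, :-1], idx[1:, 1:]       # bottom-left, bottom-right corners
    upper = np.stack([tl, tr, bl], axis=-1).reshape(-1, 3)
    lower = np.stack([tr, br, bl], axis=-1).reshape(-1, 3)
    return np.concatenate([upper, lower])    # shape (2 * (h-1) * (w-1), 3)

# Example: a 3x3 image yields 8 triangles covering its 2x2 grid of pixel quads.
print(grid_triangles(3, 3).shape)            # -> (8, 3)
```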
“…where E*(l) is the smallest error among the different search images for label l. It is computed using D* as the baseline between the reference image and the search image from which we selected E*(l), M_R as the reliability map, and C_ij(l) as the matching costs found in (6).…”
Section: Graph Cut
confidence: 99%
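The selection described in this quote (keeping, per label, the smallest matching error across the search images, together with the baseline of the image that produced it) can be sketched as follows; the array names and shapes are assumptions for illustration, not the actual graph-cut implementation.

```python
# Hedged sketch of selecting E*(l) and D* as described in the quoted passage.
import numpy as np

def select_best_error(costs, baselines):
    """costs: (S, L) matching errors per search image and label; baselines: (S,)."""
    best = np.argmin(costs, axis=0)                    # best search image per label
    E_star = costs[best, np.arange(costs.shape[1])]    # E*(l): smallest error per label
    D_star = baselines[best]                           # D*: baseline of the selected image
    return E_star, D_star
```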
“…Estimating high-quality depth remains a delicate process, but it is also crucial for creating synthesized views, where any depth imperfection may have a detrimental impact on the output quality [7,6]. So far, the Depth Estimation Reference Software (DERS) has been used (and reused from previous activities) for over 7 years, and a lot of work has been invested in it, resulting in quite powerful software: DERS 8.0.…”
Section: Introduction
confidence: 99%