Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications 2017
DOI: 10.5220/0006131500680076
Pushing the Limits for View Prediction in Video Coding

Abstract: More and more devices have depth sensors, making RGB+D video (colour+depth video) increasingly common. RGB+D video allows the use of depth image based rendering (DIBR) to render a given scene from different viewpoints, thus making it a useful asset in view prediction for 3D and free-viewpoint video coding. In this paper we evaluate a multitude of algorithms for scattered data interpolation, in order to optimize the performance of DIBR for video coding. This also includes novel contributions like a Kriging refi…
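As context for the abstract, the core DIBR operation it refers to can be sketched as forward-warping each source pixel into a target camera using its depth. This is a minimal z-buffered sketch under an assumed pinhole camera model; the function name, parameters, and nearest-point splat are illustrative assumptions, not the paper's method (the paper's focus is precisely the interpolation of the holes this naive warp leaves behind).

```python
import numpy as np

def forward_warp(rgb, depth, K, R, t):
    """Minimal DIBR forward warp (z-buffered one-pixel splat).

    Back-projects every source pixel to 3D using its depth, transforms
    it into the target camera (R, t), and reprojects with intrinsics K,
    keeping the nearest point per target pixel. Holes remain wherever
    no source pixel lands; real systems fill them by scattered data
    interpolation.
    """
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    # Back-project the full pixel grid to 3D camera coordinates.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts = (K_inv @ pix) * depth.ravel()      # 3 x N points in 3D
    # Rigid transform into the target view, then project.
    pts_t = R @ pts + t[:, None]
    proj = K @ pts_t
    z = proj[2]
    x = np.round(proj[0] / z).astype(int)
    y = np.round(proj[1] / z).astype(int)
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)
    colors = rgb.reshape(-1, rgb.shape[-1])
    ok = (x >= 0) & (x < w) & (y >= 0) & (y < h) & (z > 0)
    for i in np.flatnonzero(ok):
        if z[i] < zbuf[y[i], x[i]]:          # keep the nearest surface
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = colors[i]
    return out, zbuf
```

With an identity pose (R = I, t = 0) the warp reproduces the input exactly, which is a convenient sanity check before applying a real camera motion.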

Cited by 4 publications (5 citation statements)
References 17 publications
“…In our method, we instead render the point sets into synthetic images with a splatting technique [105,84]. Splatting can be processed in image space, enabling efficient and easily parallelizable implementations.…”
Section: Point Set Rendering (mentioning)
confidence: 99%
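The image-space splatting that the quotation refers to can be sketched as spreading each projected point's colour over nearby pixels with a distance-based weight, then normalising. This is a minimal illustrative sketch, not the technique of [105] or [84]; the function name, Gaussian weighting, and parameters are assumptions.

```python
import numpy as np

def splat(points_xy, colors, shape, radius=1, sigma=0.7):
    """Image-space point splatting sketch (illustrative assumptions).

    Each projected point spreads its colour over a small neighbourhood
    with a Gaussian weight; weighted colours are accumulated and then
    normalised, so overlapping splats blend instead of leaving
    one-pixel dots. Everything stays in image space, so each splat is
    independent and easily parallelisable.
    """
    h, w = shape
    acc = np.zeros((h, w, colors.shape[1]))
    wsum = np.zeros((h, w))
    for (px, py), c in zip(points_xy, colors):
        x0, y0 = int(round(px)), int(round(py))
        for y in range(y0 - radius, y0 + radius + 1):
            for x in range(x0 - radius, x0 + radius + 1):
                if 0 <= x < w and 0 <= y < h:
                    wgt = np.exp(-((x - px) ** 2 + (y - py) ** 2)
                                 / (2 * sigma ** 2))
                    acc[y, x] += wgt * c
                    wsum[y, x] += wgt
    valid = wsum > 0
    acc[valid] /= wsum[valid, None]
    return acc, valid
```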
“…Similar to the method proposed in [19], we perform mean-shift clustering [24] of the projected points in each pixel with respect to the depth z_i, weighted with w_{i,j}, using a Gaussian kernel density estimator G(d, s^2), where s^2 denotes the kernel width. Starting from the depth value d_i^0 = z_i for each point i ∈ I_j that contributes to the current pixel j, where I_j = {i : ‖p_j − y_i‖ < r}, the following expression is iterated until convergence…”
Section: Render Views (mentioning)
confidence: 99%
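The iteration described in the quotation (whose exact expression is elided there) amounts to a one-dimensional mean-shift over the depths contributing to a pixel. The sketch below is an assumption-laden reconstruction, not the implementation of [19] or [24]: each mode estimate starts at d_i^0 = z_i and repeatedly moves to the weighted Gaussian-kernel mean of all contributing depths until it stops changing.

```python
import numpy as np

def mean_shift_depth(z, w, s, iters=50, tol=1e-6):
    """1-D mean-shift over contributing depths (reconstruction sketch;
    the exact weighting of [19]/[24] may differ).

    z : depths z_i of the points contributing to one pixel
    w : per-point weights w_i
    s : Gaussian kernel width (s^2 in the quoted G(d, s^2))

    Points whose estimates converge to the same mode form one depth
    cluster, which separates foreground from background surfaces.
    """
    d = z.astype(float).copy()
    for _ in range(iters):
        # Kernel values G(d_i - z_l, s^2), scaled by the point weights.
        k = w * np.exp(-((d[:, None] - z[None, :]) ** 2) / (2 * s ** 2))
        d_new = (k * z[None, :]).sum(axis=1) / k.sum(axis=1)
        if np.max(np.abs(d_new - d)) < tol:
            d = d_new
            break
        d = d_new
    return d
```

With a kernel width much smaller than the gap between surfaces, estimates from the near and far surface converge to two distinct modes, which is the behaviour the clustering relies on.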
“…For the projection needed in our evaluation, we use our own, state-of-the-art projection algorithm, described in [15]. It consists of a flexible framework integrating a multitude of different methods, both state-of-the-art methods as well as own contributions, which were mainly introduced to counter artifacts we discovered during this work.…”
Section: View Projection (mentioning)
confidence: 99%
“…Here, we present the PSNR values of the compressed depth-maps, as well as PSNR and Multi-Scale SSIM [16] results of projections using the compressed depth-maps, compared to projections using the ground-truth values. We use our own, state-of-the-art projection algorithm [15] for that. For our evaluation, we use the depth extension of the Sintel datasets [19] (see also figure 1), which provide ground-truth for RGB, depth and camera poses.…”
Section: Introduction (mentioning)
confidence: 99%
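The quoted evaluation reports PSNR for compressed depth maps and for projections made with them. For reference, this is the standard PSNR definition as a short sketch; it is not the authors' evaluation code, and the peak value is an assumption (255 for 8-bit images).

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB (standard definition).

    PSNR = 10 * log10(peak^2 / MSE); identical images give infinity.
    """
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```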