2012
DOI: 10.1117/12.909462
Reference frame selection for loss-resilient depth map coding in multiview video conferencing

Abstract: Multiview video in "texture-plus-depth" format enables the decoder to synthesize freely chosen intermediate views for an enhanced visual experience. Nevertheless, transmission of multiple texture and depth maps over bandwidth-constrained and loss-prone networks is challenging, especially for conferencing applications with stringent deadlines. In this paper, we examine the problem of loss-resilient coding of depth maps by exploiting two observations. First, different depth macroblocks have significantly different error…
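The abstract's first observation (that depth macroblocks differ widely in how much a depth error hurts the synthesized view) can be illustrated with a toy per-block sensitivity estimate. The sketch below is not the paper's actual model: it assumes a common first-order simplification in which a depth error translates into a horizontal pixel shift in the synthesized view, so blocks covering strong horizontal texture gradients are the most sensitive. All names and constants are hypothetical.

```python
import numpy as np

def depth_error_sensitivity(texture, block=16, gain=0.05):
    """Toy per-macroblock sensitivity of the synthesized view to depth errors.

    Illustrative first-order model (an assumption, not the paper's derivation):
    a depth error of e at a pixel shifts that pixel horizontally by roughly
    gain * e in the synthesized view, so the induced texture distortion scales
    with the local horizontal texture gradient. Flat-texture blocks tolerate
    large depth errors; blocks on strong edges do not.
    """
    tex = texture.astype(float)
    grad_x = np.abs(np.diff(tex, axis=1))
    grad_x = np.pad(grad_x, ((0, 0), (0, 1)), mode="edge")  # keep original width
    h, w = tex.shape
    sens = np.zeros((h // block, w // block))
    for by in range(sens.shape[0]):
        for bx in range(sens.shape[1]):
            ys, xs = by * block, bx * block
            sens[by, bx] = gain * grad_x[ys:ys + block, xs:xs + block].mean()
    return sens

# Example on a random QCIF-sized luma frame, just to exercise the function.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(144, 176))
print(depth_error_sensitivity(frame).shape)  # (9, 11) macroblock grid
```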

Cited by 15 publications (10 citation statements)
References: 7 publications
“…[10,11] studied the problem of how smartly encoded multiview video that facilitates view-switching can be replicated in storage-constrained distributed servers across a network to minimize view-switching delay. [12,13] investigated how texture and depth videos can be unequally protected to minimize the synthesized view distortion when streaming over a network prone to packet losses. However, none of these prior streaming works studied the problem of how video streams of different views can be optimally selected and shared among users observing different virtual views, which is the focus of this paper.…”
Section: Related Work
confidence: 99%
“…For example, toward the goal of loss resiliency, [15,16] proposed to exploit the flexibility of reference picture selection (RPS) [17] in the H.264 video coding standard [18] to encode a visually important block in a current texture or depth frame using a reference frame further in the past as predictor, so that the probability of correct decoding can be improved. The error concealment problem, i.e., how to best recover lost information in streaming video once packet losses have already occurred [19], has never been studied in the context of free viewpoint video, however.…”
Section: Related Work
confidence: 99%
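As a rough illustration of why referencing a frame further in the past can raise the probability of correct decoding, the sketch below uses a simple chain model (an assumption, not the cited papers' exact analysis): each frame's packet is lost independently with probability p, frame 0 is an intra frame assumed intact, and a predicted frame decodes correctly only if its own packet arrives and its chosen reference frame decoded correctly.

```python
def decode_probabilities(refs, p_loss=0.1):
    """Probability that each frame decodes correctly under a simple loss model.

    refs[t] is the index of the reference frame chosen for frame t (refs[0] is
    ignored; frame 0 is an intra frame assumed to be received intact).
    A frame decodes only if its own packet arrives AND its reference decoded.
    """
    prob = [1.0]  # frame 0: intra, assumed intact
    for t in range(1, len(refs)):
        prob.append((1.0 - p_loss) * prob[refs[t]])
    return prob

# Conventional IPPP chain: each frame references the previous one.
ippp = decode_probabilities(refs=[0, 0, 1, 2, 3, 4], p_loss=0.1)
# Reference picture selection: frames 3-5 reach further back to frame 1,
# shortening their dependency chains (at some cost in coding efficiency).
rps = decode_probabilities(refs=[0, 0, 1, 1, 1, 1], p_loss=0.1)
print([round(x, 3) for x in ippp])  # [1.0, 0.9, 0.81, 0.729, 0.656, 0.59]
print([round(x, 3) for x in rps])   # [1.0, 0.9, 0.81, 0.81, 0.81, 0.81]
```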
“…In [10], a similar scheme that minimizes expected synthesized view distortion through block-level reference frame selection was proposed for depth maps only. In this work, we first extend the idea in [10] to the encoding of both texture and depth maps, where the relative importance of texture and depth MBs must be determined.…”
Section: Related Work
confidence: 99%
“…In this work, we first extend the idea in [10] to the encoding of both texture and depth maps, where the relative importance of texture and depth MBs must be determined. Second, we expand the coding modes available to each MB to include intra block coding.…”
Section: Related Work
confidence: 99%
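The block-level selection described in these excerpts can be sketched as a per-macroblock Lagrangian decision over candidate modes (inter prediction from one of several past reference frames, or intra coding), trading off expected synthesized-view distortion against rate. This is a simplified, hypothetical formulation; the importance weighting of texture versus depth MBs and the distortion numbers below stand in for whatever models the cited works actually use.

```python
def select_mode(candidates, importance, lam=1.0):
    """Pick, for one macroblock, the coding mode minimizing an expected cost.

    candidates: list of dicts with
        'p_ok'      -- probability the MB decodes correctly (e.g., higher for
                       intra or for inter modes with a robust reference chain),
        'd_ok'      -- distortion when decoded (quantization error),
        'd_conceal' -- distortion after error concealment when decoding fails,
        'rate'      -- bits needed by this mode.
    importance: weight reflecting how much this MB (texture or depth) affects
                the synthesized view.
    Returns the candidate with minimal importance-weighted expected distortion
    plus a Lagrangian rate term.
    """
    def cost(c):
        expected_d = c["p_ok"] * c["d_ok"] + (1.0 - c["p_ok"]) * c["d_conceal"]
        return importance * expected_d + lam * c["rate"]
    return min(candidates, key=cost)

# Hypothetical candidates for an important depth MB near an object boundary.
modes = [
    {"name": "inter, ref t-1", "p_ok": 0.73, "d_ok": 2.0, "d_conceal": 40.0, "rate": 60},
    {"name": "inter, ref t-4", "p_ok": 0.88, "d_ok": 4.0, "d_conceal": 40.0, "rate": 90},
    {"name": "intra",          "p_ok": 0.95, "d_ok": 3.0, "d_conceal": 40.0, "rate": 220},
]
# The high importance pushes the choice toward the more robust "ref t-4" mode.
print(select_mode(modes, importance=20.0, lam=1.0)["name"])  # inter, ref t-4
```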