2009
DOI: 10.1016/j.image.2008.10.013
View generation with 3D warping using depth information for FTV

Cited by 254 publications (63 citation statements); references 9 publications.
“…The conventional view extrapolation result based on one primary view (PV) is denoted PV, while PV plus the full warping and the selective warping (SW) of one complementary view (CV) are referred to as PV+CV and PV+SW CV, respectively. Note that the conventional hole-filling process based on image inpainting [14], [23] is disabled in this setting, as the objective of the experiments is to evaluate the hole size, in number of pixels, produced by each approach. A margin of 60 pixels at the image boundaries is also not counted as holes, since holes in this area are mainly due to differences in the capture angle or range of each camera rather than to disocclusion.…”
Section: Results
confidence: 99%
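The hole-size metric described above (count hole pixels, but exclude a 60-pixel boundary margin whose holes come from differing camera fields of view rather than disocclusion) can be sketched as follows. The function name and the toy mask are illustrative assumptions, not code from the cited work:

```python
import numpy as np

def count_hole_pixels(hole_mask, margin=60):
    """Count disocclusion hole pixels, ignoring a boundary margin.

    hole_mask: 2-D boolean array, True where the warped view has no pixel.
    margin: border width (pixels) excluded from the count; holes there
    mainly reflect differing camera capture angles, not disocclusion.
    """
    h, w = hole_mask.shape
    inner = hole_mask[margin:h - margin, margin:w - margin]
    return int(inner.sum())

# Toy example: a 200x200 mask with a 10x10 hole in the centre and
# spurious holes along the left edge (the edge holes are excluded).
mask = np.zeros((200, 200), dtype=bool)
mask[95:105, 95:105] = True   # disocclusion hole -> counted
mask[:, :5] = True            # boundary holes -> ignored
print(count_hole_pixels(mask))  # 100
```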
“…3-D warping projects an image onto another image plane. It can be decomposed into two fundamental steps.…”
Section: Proposed Object-Based Depth-Image-Based-Rendering Method
confidence: 99%
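The two-step decomposition mentioned in the excerpt is the standard one: first back-project each source pixel into 3-D space using its depth and the source camera intrinsics, then re-project the 3-D points onto the target image plane. A minimal sketch under a pinhole-camera assumption (the function name and parameterization are illustrative, not from the cited paper):

```python
import numpy as np

def warp_3d(depth, K_src, K_dst, R, t):
    """Two-step 3-D warping of pixel coordinates (a sketch).

    Step 1: back-project every source pixel into 3-D space using its
            depth value and the source intrinsics K_src.
    Step 2: re-project the 3-D points onto the target image plane via
            the relative pose (R, t) and the target intrinsics K_dst.
    Returns the warped pixel coordinates, shape (H, W, 2).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    # Step 1: X = depth * K_src^{-1} [u, v, 1]^T
    rays = pix @ np.linalg.inv(K_src).T
    X = rays * depth[..., None]
    # Step 2: x' ~ K_dst (R X + t), then dehomogenize
    Xc = X @ R.T + t
    proj = Xc @ K_dst.T
    return proj[..., :2] / proj[..., 2:3]
```

As a sanity check, warping with an identity pose (R = I, t = 0) and identical intrinsics maps every pixel to itself.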
“…To ensure free scene navigation with no switching delay, the client chooses to download representations that permit reconstruction of all viewpoints in the navigation window w(u). Given a set of downloaded representations, any virtual viewpoint u can be synthesized from a pair of left and right reference view images v_L and v_R in the downloaded set, with v_L < u < v_R, via a classical DIBR technique, e.g., [34]. We denote by v any camera view (thus any possible anchor view), while u represents any viewpoint (either a virtual viewpoint or a camera view) that can be displayed during the navigation.…”
Section: System Model
confidence: 99%
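The constraint v_L < u < v_R implies the client picks an enclosing pair of downloaded anchor views around the requested viewpoint. A hypothetical helper illustrating one natural choice (the closest enclosing pair); the cited excerpt only states that some such pair is used for DIBR synthesis:

```python
def pick_reference_pair(u, downloaded):
    """Pick left/right anchor views v_L < u < v_R around viewpoint u.

    downloaded: sorted list of camera-view indices available at the
    client. Returns the closest enclosing pair (v_L, v_R); if u itself
    is a downloaded camera view, no synthesis is needed and (u, u) is
    returned. (Illustrative policy, not from the cited paper.)
    """
    if u in downloaded:
        return u, u
    left = max(v for v in downloaded if v < u)
    right = min(v for v in downloaded if v > u)
    return left, right

print(pick_reference_pair(2.5, [1, 2, 4, 5]))  # (2, 4)
```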