Depth-image-based rendering (DIBR) is used to generate additional views of a real-world scene from images or videos and associated per-pixel depth information. An inherent problem of the view synthesis concept is that image information which is occluded in the original view may become visible in the "virtual" image. The resulting question is: how can these disocclusions be covered in a visually plausible manner? In this paper, a new temporally and spatially consistent hole filling method for DIBR is presented. In a first step, disocclusions in the depth map are filled. Then, a background sprite is generated and updated with every frame, using the original and synthesized information from previous frames to achieve temporally consistent results. Next, small holes resulting from depth estimation inaccuracies are closed in the textured image, using methods based on solving Laplace equations. The residual disoccluded areas are coarsely initialized and subsequently refined by patch-based texture synthesis. Experimental results are presented, highlighting that gains in objective and visual quality can be achieved in comparison to the latest MPEG view synthesis reference software (VSRS).
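The Laplace-based step described above can be illustrated with a minimal sketch: hole pixels are treated as unknowns of a discrete Laplace equation whose boundary conditions are the known surrounding pixels, solved here by simple Jacobi relaxation (each hole pixel repeatedly replaced by the mean of its four neighbours). The function name and iteration count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fill_holes_laplace(image, mask, iterations=500):
    # Jacobi relaxation for the discrete Laplace equation:
    # hole pixels (mask == True) converge to the average of their
    # 4-neighbours; known pixels act as Dirichlet boundary values.
    img = image.astype(np.float64).copy()
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = avg[mask]  # update only the unknown (hole) pixels
    return img

# Toy example: a flat 10x10 image with a small 2x2 hole punched out.
img = np.full((10, 10), 128.0)
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 4:6] = True
img[mask] = 0.0
filled = fill_holes_laplace(img, mask)
```

This membrane-interpolation approach is only adequate for small, smooth holes, which is consistent with the abstract's use of it for minor depth-estimation inaccuracies rather than for large disocclusions.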
In this paper, novel intra prediction methods based on image inpainting approaches are proposed. The H.264/AVC intra prediction modes are not well suited for processing complex textures at low bit rates. Our algorithm utilizes an efficient combination of partial differential equations (PDEs) and patch-based texture synthesis in addition to the standard directional predictors. Bit rate savings of up to 3.5% compared to the H.264/AVC standard are shown.
Depth image-based rendering (DIBR) techniques allow for a wide variety of 3-D applications, including synthesizing additional virtual views in a multiview-video-plus-depth (MVD) representation. The MVD format consists of scene texture and depth information for a limited number of original views of the same scene. One of the main obstacles in the DIBR technique lies in the disocclusion problem, which results from the fact that a scene can only be observed from a set of original views. This can lead to missing information in the generated virtual views, especially in extrapolation scenarios. Our work describes a novel algorithm that synthesizes such disoccluded textures. The proposed synthesizer enhances the visual experience by taking spatial and temporal video information into account. In order to compensate for global motion in sequences, image registration is incorporated into the framework. Objective and subjective gains are shown compared to three state-of-the-art approaches.
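To make the disocclusion problem concrete, the following sketch forward-warps a texture row into a virtual view: each pixel is shifted horizontally by a disparity proportional to its depth value, a z-buffer resolves conflicts in favour of nearer pixels, and any target pixel left unwritten is a disocclusion that a synthesizer like the one described above must fill. The linear depth-to-disparity mapping and the convention that larger depth values are nearer are simplifying assumptions for illustration, not the paper's camera model.

```python
import numpy as np

def dibr_warp(texture, depth, baseline_shift):
    # Forward warp along the horizontal baseline. Assumes depth stores
    # normalized disparity (larger value = nearer to the camera).
    h, w = texture.shape
    virtual = np.zeros_like(texture)
    zbuf = np.full((h, w), -np.inf)   # z-buffer: nearest pixel wins
    hole = np.ones((h, w), dtype=bool)  # True until a pixel lands here
    for y in range(h):
        for x in range(w):
            d = depth[y, x]
            xv = x + int(round(baseline_shift * d))
            if 0 <= xv < w and d > zbuf[y, xv]:
                virtual[y, xv] = texture[y, x]
                zbuf[y, xv] = d
                hole[y, xv] = False
    return virtual, hole

# Toy example: 1x8 row with a foreground block at columns 3-4.
texture = np.arange(8, dtype=float).reshape(1, 8)
depth = np.zeros((1, 8))
depth[0, 3:5] = 1.0
virtual, hole = dibr_warp(texture, depth, baseline_shift=2.0)
```

In the toy example the foreground block shifts right and occludes background pixels, while the background it uncovers (columns 3 and 4) has no source data in the original view; these are exactly the disoccluded regions that grow with the extrapolation baseline.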
This paper addresses the problem of evaluating virtual view synthesized images in the multi-view video context. As a matter of fact, view synthesis brings new types of distortion. The question concerns the ability of traditionally used objective metrics to assess synthesized-view quality, considering these new types of artifacts. The experiments conducted to determine their reliability consist in assessing seven different view synthesis algorithms. Subjective and objective measurements have been performed. Results show that the most commonly used objective metrics can be far from human judgment, depending on the artifact being dealt with.
This paper considers the reliability of usual assessment methods when evaluating virtual synthesized views in the multiview video context. Virtual views are generated from depth-image-based rendering (DIBR) algorithms. Because DIBR algorithms involve geometric transformations, new types of artifacts arise. The question regards the ability of commonly used methods to deal with such artifacts. This paper investigates how correlated usual metrics are to human judgment. The experiments consist in assessing seven different view synthesis algorithms by subjective and objective methods. Three different 3D video sequences are used in the tests. The resulting virtual synthesized sequences are assessed through objective metrics and subjective protocols. Results show that usual objective metrics can fail to assess synthesized views in accordance with human judgment.
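The most common of the objective metrics examined in these evaluation studies is PSNR, which can be sketched in a few lines. Because it measures per-pixel differences, a small geometric shift introduced by DIBR warping can produce a large PSNR penalty even when the view looks fine to a human observer, which is one plausible reason such metrics diverge from subjective judgment. The function below is a generic PSNR definition, not the specific evaluation pipeline of the papers above.

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference view and a
    # synthesized view; higher is better, identical images give +inf.
    mse = np.mean((reference.astype(np.float64) -
                   synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a uniform error of 16 grey levels on an 8-bit image.
ref = np.full((4, 4), 100.0)
syn = np.full((4, 4), 116.0)
score = psnr(ref, syn)
```

Metrics of this kind treat every pixel error equally; perceptually oriented alternatives (e.g. structural-similarity-based measures) weight errors differently, which is why the studies compare several metrics against subjective protocols.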