Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free-viewpoint video, which generates virtual 3D synthesized images through a depth-image-based rendering (DIBR) technique. However, inaccurate depth maps and imperfect DIBR techniques introduce various geometric distortions that seriously degrade users' visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine whether the synthesized content is fit for its application. In this paper, a no-reference IQA metric for 3D synthesized images based on visual-entropy-guided multi-layer feature analysis is proposed. According to energy entropy, the geometric distortions are divided into two visual attention layers, namely a bottom-up layer and a top-down layer. In the bottom-up layer, salient distortion is measured by its regional proportion together with a transition threshold. In parallel, in the top-down layer, the key regions over which insignificant geometric distortions are distributed are extracted by a relative total variation model, and these distortions are measured through the interaction of decentralized and concentrated attention. By integrating the features of both the bottom-up and top-down layers, a quality evaluation model that better matches visual perception is built. Experimental results show that the proposed method outperforms state-of-the-art metrics in assessing the quality of 3D synthesized images.
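The entropy-guided layer split described above can be illustrated with a minimal sketch. The exact energy-entropy definition and threshold rule are not given in the abstract, so the gradient-energy histogram entropy and the mean-entropy threshold below are assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

def block_energy_entropy(img, block=16, bins=32):
    """Shannon entropy of gradient energy within each block (a hypothetical
    proxy for the paper's energy entropy)."""
    gy, gx = np.gradient(img.astype(np.float64))
    energy = gx**2 + gy**2
    h, w = energy.shape
    H = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = energy[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            H[i, j] = -(p * np.log2(p)).sum()
    return H

def split_layers(entropy_map, thresh=None):
    """Split blocks into a salient (bottom-up) mask and a non-salient
    (top-down) mask by thresholding entropy; the mean-entropy threshold
    is an assumed rule, not taken from the paper."""
    if thresh is None:
        thresh = entropy_map.mean()
    bottom_up = entropy_map >= thresh
    return bottom_up, ~bottom_up
```

High-entropy blocks would then feed the bottom-up salient-distortion features, and the remaining blocks the top-down analysis.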
When watching 3D synthesised video (3D‐SV) and switching viewpoints, asymmetric distortion can arise: the left (right) viewpoint is a synthesised video generated by a rendering technique, while the right (left) viewpoint is a real video captured by a camera. Accurately estimating the quality of 3D‐SV with binocular asymmetric distortions is a new and challenging problem. To address it, a blind quality assessment method for 3D‐SV with binocular asymmetric distortions is proposed. Firstly, the local edge deformations of synthesised videos at different scales are measured by calculating their standard deviations. Secondly, the global naturalness of synthesised videos is computed by analysing their natural statistical characteristics. Thirdly, a strategy for fusing the left and right quality scores is proposed, which considers their texture information in different directions. Finally, a random forest is used to obtain the objective quality score. The experimental results show the superiority of the proposed method on an asymmetrically distorted 3D‐SV database.
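The multi-scale edge-deformation measurement and left/right fusion steps can be sketched as follows. The abstract does not specify the edge operator, the patch statistics, or the fusion weights, so the gradient-magnitude standard deviation per dyadic scale and the gradient-energy weighting below are illustrative assumptions:

```python
import numpy as np

def edge_deformation_features(img, scales=3):
    """Multi-scale edge-deformation proxy: standard deviation of the
    gradient magnitude at each dyadic scale (assumed formulation)."""
    feats = []
    x = img.astype(np.float64)
    for _ in range(scales):
        gy, gx = np.gradient(x)
        feats.append(np.hypot(gx, gy).std())
        x = x[::2, ::2]  # dyadic downsampling for the next scale
    return np.array(feats)

def fuse_views(score_l, score_r, img_l, img_r):
    """Weight the left/right quality scores by each view's directional
    texture energy (horizontal + vertical gradient energy) -- an assumed
    stand-in for the paper's direction-aware fusion strategy."""
    def energy(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return (gx**2).sum() + (gy**2).sum()
    el, er = energy(img_l), energy(img_r)
    wl = el / (el + er + 1e-12)
    return wl * score_l + (1.0 - wl) * score_r
```

In the paper's pipeline, features like these (plus naturalness statistics) would be regressed to a quality score with a random forest before fusion.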
For synthesized videos, view synthesis and compression are the two main types of artefacts influencing perceived quality. Moreover, in a windowed six-degrees-of-freedom (6DoF) system, vertical disparity co-exists with horizontal disparity, and viewers may move while exploring the content. This letter constructs a windowed 6DoF synthesized video quality database of 128 distorted videos, built from four sequences, four compression levels, and four rendering schemes. The impact of two navigation trajectories on the perceived quality of windowed 6DoF videos is studied subjectively. The authors also evaluate several objective quality metrics on the proposed database; the results reveal both its effectiveness and its necessity.
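Evaluating objective quality metrics on such a database is conventionally done by correlating predicted scores with subjective scores, typically via Pearson linear correlation (PLCC) and Spearman rank-order correlation (SROCC). A minimal self-contained sketch of both criteria (SROCC computed as Pearson correlation of ranks, ties ignored for simplicity):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between predicted and
    subjective quality scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks
    (this simple double-argsort ranking does not average tied ranks)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v)))
    return plcc(rank(x), rank(y))
```

A metric whose predictions are perfectly monotonic with the subjective scores yields SROCC = 1 even when the relation is nonlinear, which is why both criteria are usually reported together.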