This paper presents a technique for the efficient compression of high dynamic range (HDR) video sequences. Such sequences usually span several orders of magnitude of real-world luminance intensity and are therefore mostly stored in a floating-point representation. To obtain a coded representation that is bit-stream compatible with the H.264/AVC video coding standard, the float-valued HDR samples must first be mapped to a suitable integer representation. The mapping proposed in this paper is adapted to the dynamic range of each video frame. Furthermore, to compensate for the associated dynamic contrast variation across frames, a weighted prediction method and quantization adaptation are introduced. The experiments show that the proposed method offers highly efficient HDR video compression: only a fraction of the bit rate of a non-adaptive reference method is required to represent an HDR video sequence at the same quality.
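The abstract does not specify the exact mapping function, but the idea of a per-frame dynamic-range-adaptive mapping from floating-point luminance to an integer code range can be sketched as follows. This is a minimal illustration assuming a logarithmic mapping over each frame's own min/max range; the function names and the 12-bit code depth are hypothetical, not taken from the paper.

```python
import numpy as np

def frame_adaptive_mapping(luminance, n_bits=12, eps=1e-6):
    """Map float HDR luminance to integer codes, adapted to this
    frame's dynamic range (illustrative sketch, not the paper's method)."""
    log_l = np.log2(np.maximum(luminance, eps))
    lo, hi = log_l.min(), log_l.max()            # per-frame dynamic range
    scale = (2 ** n_bits - 1) / max(hi - lo, eps)
    codes = np.round((log_l - lo) * scale).astype(np.uint16)
    return codes, (lo, hi)                       # (lo, hi) is the side info
                                                 # needed to invert the map

def inverse_mapping(codes, lo, hi, n_bits=12):
    """Reconstruct float luminance from integer codes and range bounds."""
    log_l = codes / (2 ** n_bits - 1) * (hi - lo) + lo
    return np.exp2(log_l)
```

Because `lo` and `hi` change from frame to frame, the same integer code corresponds to different luminance levels in different frames; this is the contrast variation that the paper's weighted prediction and quantization adaptation are designed to compensate for.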
We present a backwards-compatible high dynamic range video coding framework based on H.264/AVC. It allows a standard low dynamic range (LDR) video as well as an HDR video to be extracted from a single compressed bit stream. A joint global and local inter-layer prediction method is proposed to reduce the redundancy between the LDR and HDR layers. It is based on a common color space that can represent HDR video data in a perceptually lossless manner. We show how the inter-layer prediction parameters can be estimated in a rate-distortion-optimized way and efficiently encoded to reduce side information. Our evaluations demonstrate that the proposed framework outperforms the state of the art for arbitrary tone-mapping operators. Compared to simulcast, it achieves bit-rate savings of up to 50%.
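The global part of an inter-layer prediction of this kind can be illustrated with a per-channel gain/offset model fitted from the decoded LDR layer to the HDR layer, so that only the model parameters and the prediction residual need to be coded. This is a simplified sketch under that assumption; the paper's actual joint global/local model, color space, and rate-distortion-optimized estimation are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def fit_global_prediction(ldr, hdr):
    """Least-squares fit of a per-channel gain/offset predicting the
    HDR layer from the decoded LDR layer (global model only)."""
    params = []
    for c in range(ldr.shape[-1]):
        x = ldr[..., c].ravel().astype(np.float64)
        y = hdr[..., c].ravel().astype(np.float64)
        A = np.stack([x, np.ones_like(x)], axis=1)   # [gain, offset] design
        gain, offset = np.linalg.lstsq(A, y, rcond=None)[0]
        params.append((gain, offset))
    return params          # small side information sent with the bit stream

def predict_hdr(ldr, params):
    """Apply the fitted gain/offset per channel to form the HDR prediction."""
    pred = np.empty(ldr.shape, dtype=np.float64)
    for c, (gain, offset) in enumerate(params):
        pred[..., c] = gain * ldr[..., c] + offset
    return pred
```

In a scalable coder, the enhancement layer would then carry only the residual `hdr - predict_hdr(ldr, params)`, which is small wherever the tone mapping between the layers is well approximated by the model.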
This paper presents a system for the subjective evaluation of audio, video, and audiovisual quality. The system combines the well-known MUSHRA and SAMVIQ methods for the evaluation of audio and video quality. The implementation uses inexpensive commercial off-the-shelf hardware.