This paper presents a new watermarking framework for authenticating H.264 compressed videos. The authentication data is embedded as a fragile, blind, and erasable watermark with low video-quality degradation. Because the watermark is fragile, hard authentication is possible. In contrast to other approaches, watermarking is performed after the H.264 compression process, so the authentication information can be embedded into already encoded videos. The watermark can be removed to reconstruct the original H.264 compressed video. The framework is based on a new transcoder that analyzes the original H.264 bit stream, computes the watermark, embeds it, and generates a new H.264 bit stream. A hash value is used to authenticate the video; it is encrypted with the private key of an asymmetric cryptosystem. The watermark payload consists of the encrypted hash value and a certificate containing the public key. Selected skipped macroblocks of the H.264 video carry the watermark. A dedicated selection process sets the distribution and number of skipped blocks, as well as the number of embedded bits per block, to achieve low video-quality degradation and a low data rate. The performance of several embedding approaches is analyzed and discussed. The result of the framework is a new watermarked H.264 bit stream in which all data necessary for authentication are embedded and cannot be lost.
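The abstract's payload-distribution step (spreading the authentication bits over selected skipped macroblocks, a few bits per block) can be sketched as a toy model. Everything here is an illustrative assumption, not the paper's actual algorithm: the function name `select_blocks`, the greedy block choice, the bit layout, and the use of a raw SHA-256 digest in place of the encrypted hash.

```python
import hashlib


def select_blocks(skipped_indices, payload_bits, bits_per_block=2):
    """Toy model: assign payload bits to skipped macroblocks.

    skipped_indices: indices of macroblocks the encoder marked as skipped
    payload_bits:    the watermark payload as a '0'/'1' string
    bits_per_block:  capacity per selected block (kept small to limit
                     quality degradation, as the abstract describes)

    Returns a mapping block_index -> bit substring. The real selection
    process also controls the spatial distribution of blocks; this
    sketch simply takes the first blocks that fit (an assumption).
    """
    needed = -(-len(payload_bits) // bits_per_block)  # ceiling division
    if needed > len(skipped_indices):
        raise ValueError("payload does not fit into the skipped blocks")
    chosen = skipped_indices[:needed]
    return {
        blk: payload_bits[i * bits_per_block:(i + 1) * bits_per_block]
        for i, blk in enumerate(chosen)
    }


# Usage: in the paper the payload is an encrypted hash plus a certificate;
# here a plain SHA-256 digest of hypothetical frame data stands in for it.
digest = hashlib.sha256(b"hypothetical frame data").digest()
payload = "".join(f"{byte:08b}" for byte in digest)  # 256 bits
mapping = select_blocks(list(range(200)), payload, bits_per_block=2)
```

With 2 bits per block, the 256-bit toy payload occupies 128 skipped macroblocks; concatenating the per-block substrings in selection order recovers the payload, which is what makes the watermark erasable and blind-readable in this simplified model.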
The current trend towards multi-core processors imposes the necessity of finding viable strategies to exploit the additional computational resources in media processing. Challenges for video decoding include the appropriate partitioning of decoder steps, efficient dependency tracking, and resource allocation/synchronization for multiple threads with respect to the resulting overhead. In this paper, we propose two variants of multithreading with distributed synchronization. The first method is optimized for minimum-latency decoding, as required by conversational applications. The second method maximizes total throughput at the cost of higher latency. In addition, we propose a method of dynamic core usage that reduces the total allocated processing resources and the associated inter-process communication overhead. This method is based on a coarse-grained complexity estimation. To adapt implicitly to different combinations of processor architectures, associated memory interfaces, and power-saving states, the scheme is feedback assisted: by correlating the initial estimate with the actually required processing time, a sufficiently accurate prediction of the number of cores required for the image-processing part can be obtained. Experimental results demonstrate scaling of up to a factor of 3.5 on a quad-core machine, as well as the limits of the proposed approach regarding the complexity of sequential bitstream processing. We demonstrate that real-time 4K-resolution decoding is feasible on current mid-range PC hardware. For less demanding streams, the adaptive mode reduces the total required CPU resources by up to 10% compared to the greedy approach.
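The feedback loop described above (correlating a coarse complexity estimate with the measured processing time to predict the required core count) can be sketched as follows. This is a minimal model under stated assumptions: the class name `AdaptiveCorePredictor`, the exponential-moving-average correction, and the abstract time units are all illustrative, not the paper's actual scheme.

```python
import math


class AdaptiveCorePredictor:
    """Toy feedback-assisted core-count predictor.

    'scale' correlates the coarse complexity estimate with the
    measured single-core processing time via an exponential moving
    average, implicitly absorbing differences in CPU architecture,
    memory interface, and power-saving state.
    """

    def __init__(self, alpha=0.3):
        self.scale = 1.0   # estimate -> seconds correction factor
        self.alpha = alpha  # EMA weight for new measurements

    def predict(self, est_complexity, frame_deadline):
        """Cores needed so the predicted work fits the frame deadline."""
        predicted_time = est_complexity * self.scale
        return max(1, math.ceil(predicted_time / frame_deadline))

    def update(self, est_complexity, measured_time):
        """Feed back the actually measured single-core time."""
        if est_complexity > 0:
            ratio = measured_time / est_complexity
            self.scale = (1 - self.alpha) * self.scale + self.alpha * ratio
```

In this model, if frames consistently take twice as long as the raw estimate suggests, `scale` converges towards 2 and the predictor requests correspondingly more cores; conversely, for less demanding streams it converges downwards and fewer cores are allocated, which is the resource saving the adaptive mode targets.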