Joint decoders for fingerprinting codes are of utmost importance in theoretical papers establishing the concept of fingerprinting capacity, yet no implementation supporting a large user base has been reported to date. This paper presents an iterative decoder that is a first step toward practical large-scale joint decoding. The discriminative power of its scores benefits, on the one hand, from side information about previously accused users and, on the other hand, from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes assumptions about the collusion size or strategy, provided the attack is memoryless and fair. The extension to soft outputs from the watermarking layer is straightforward. Extensive experiments benchmark the strong performance of the decoder and offer a clear comparison with previous state-of-the-art decoders.
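To fix ideas, the baseline that such joint decoders improve upon is the simple (single-user) accusation score for a binary Tardos code. The sketch below is a minimal, hypothetical illustration of that baseline, not the paper's iterative joint decoder: it draws a Tardos code with arcsine-distributed biases and computes the symmetric single-user score of Škorić et al. against a pirated sequence.

```python
import numpy as np

def tardos_code(n_users, m, rng, t=0.01):
    """Draw per-position biases p_i (arcsine distribution, cut off at t)
    and i.i.d. binary codewords X[j, i] ~ Bernoulli(p_i) for each user j."""
    lo, hi = np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t))
    p = np.sin(rng.uniform(lo, hi, m)) ** 2
    X = (rng.random((n_users, m)) < p).astype(int)
    return p, X

def simple_score(x, y, p):
    """Symmetric single-user Tardos score of codeword x against the
    pirated sequence y: matching symbols are rewarded, mismatches
    penalized, with weights depending on the bias p of each position."""
    g1 = np.sqrt((1 - p) / p)  # weight for the symbol '1'
    g0 = np.sqrt(p / (1 - p))  # weight for the symbol '0'
    s = np.where(y == 1,
                 np.where(x == 1, g1, -g0),
                 np.where(x == 0, g0, -g1))
    return s.sum()
```

Under a fair memoryless attack (e.g. majority vote), colluders' expected scores grow linearly in the code length while innocent users' scores stay centered at zero, which is what makes thresholded accusation possible.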
In this paper, we propose a blind watermarking method integrated into the JPEG2000 coding pipeline. Prior to the entropy coding stage, the binary watermark is placed in the independent code-blocks using Quantization Index Modulation (QIM). The quantization strategy allows data to be embedded in the low-resolution detail subbands as well as in the approximation image. Watermark recovery is performed during image decompression, without reference to the original image. The proposed embedding scheme is robust to compression and other image processing attacks. We demonstrate two application scenarios: image authentication and copyright protection.
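The core mechanism can be sketched in a few lines. The following is a minimal scalar-QIM example, not the paper's JPEG2000 integration: each wavelet coefficient is quantized to one of two interleaved lattices (offset by half a quantization step) depending on the watermark bit, and blind extraction simply picks the nearer lattice. The step size `delta` here is an arbitrary illustrative choice.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient with scalar QIM: quantize each
    coefficient to the lattice delta*Z (bit 0) or delta*Z + delta/2 (bit 1)."""
    coeffs = np.asarray(coeffs, dtype=float)
    offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return np.round((coeffs - offsets) / delta) * delta + offsets

def qim_extract(coeffs, delta=8.0):
    """Blind extraction: decide each bit by which of the two lattices
    lies nearest to the received coefficient."""
    coeffs = np.asarray(coeffs, dtype=float)
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    d1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta
                          + delta / 2))
    return (d1 < d0).astype(int)
```

Because the two lattices are delta/2 apart, extraction survives any distortion of magnitude below delta/4 per coefficient, which is the source of QIM's robustness to moderate compression noise.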
In this article, we investigate a novel joint statistical model for subband coefficient magnitudes of the Dual-Tree Complex Wavelet Transform, coupled to a Bayesian framework for content-based image retrieval. The joint model captures the dependencies among transform coefficients of the same decomposition scale across different color channels, and it readily incorporates recent work on modeling marginal coefficient distributions. We demonstrate the applicability of the novel model in the context of color texture retrieval on four texture image databases and compare retrieval performance to a collection of state-of-the-art approaches in the field. Our experiments further include a thorough computational analysis of the main building blocks, runtime measurements, and an analysis of storage requirements. Finally, we identify a model configuration with low storage requirements, competitive retrieval accuracy, and a runtime behavior that enables deployment even on large image databases.
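The Bayesian retrieval framework reduces to maximum-likelihood ranking: fit a statistical model to each database texture's subband features, then rank candidates by the likelihood they assign to the query's features. The sketch below substitutes a plain multivariate Gaussian for the paper's joint DT-CWT magnitude model; the function names and the feature representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a multivariate Gaussian to one texture's feature vectors
    (a simple stand-in for the joint subband-magnitude model)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def avg_log_likelihood(features, mu, cov):
    """Average log-likelihood of the query's feature vectors under a
    candidate texture's fitted model."""
    d = features.shape[1]
    diff = features - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)  # per-row Mahalanobis
    return (-0.5 * (quad + logdet + d * np.log(2 * np.pi))).mean()

def retrieve(query_feats, database):
    """Rank database entries {name: (mu, cov)} by likelihood of the query."""
    scores = {name: avg_log_likelihood(query_feats, mu, cov)
              for name, (mu, cov) in database.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Only the fitted model parameters need to be stored per database image, which is why the storage analysis in the article matters: a compact parametric model keeps the index small while the likelihood evaluation stays cheap at query time.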