Historically, steganographic schemes were designed to preserve image statistics or steganalytic features. Since most state-of-the-art steganalytic methods employ a machine learning (ML) based classifier, it is reasonable to counter steganalysis by trying to fool the ML classifiers. However, simply applying perturbations to stego images as adversarial examples may cause data extraction to fail and may introduce unexpected artefacts detectable by other classifiers. In this paper, we present a steganographic scheme with a novel operation called adversarial embedding, which hides a stego message while at the same time fooling a convolutional neural network (CNN) based steganalyzer. The proposed method works under the conventional framework of distortion minimization. Adversarial embedding is achieved by adjusting the costs of image element modifications according to the gradients backpropagated from the CNN classifier targeted by the attack. As a result, the modification direction has a higher probability of matching the sign of the gradient. In this way, so-called adversarial stego images are generated. Experiments demonstrate that the proposed steganographic scheme is secure against the targeted adversary-unaware steganalyzer. In addition, it deteriorates the performance of other adversary-aware steganalyzers, opening the way to a new class of modern steganographic schemes capable of overcoming powerful CNN-based steganalysis.

Index Terms: Steganography, steganalysis, adversarial machine learning.

I. INTRODUCTION

Image steganography is the art and science of concealing covert information within images. It is usually achieved by modifying image elements, such as pixels or DCT coefficients.
On the other side of the game, steganalysis aims to reveal the presence of secret information by detecting whether there are abnormal artefacts left by data embedding. The history of steganography and steganalysis is rich in interesting stories, as the two compete with each other and benefit and evolve from the competition [1]. The earliest steganographic method was implemented by substituting the least significant bits of image elements with message bits. The stego artefacts introduced by this method can be effectively detected by the Chi-squared attack [2], or by steganalytic features based on first-order statistics [3]. In this initial phase of the competition, statistical hypothesis testing or a simple linear classifier such as FLD (Fisher Linear Discriminant) could serve the needs of steganalysis. The first-order statistics can be restored after data embedding, as done in [4], and abnormal artefacts in the first-order statistics can also be avoided, as in [5], [6]. As a consequence, more powerful steganalytic features based on second-order statistics [7], [8] were proposed. In this period, advanced machine learning (ML) tools, such as the SVM (Support Vector Machine), were operated on high-dimensional features (with dimensions typically in the several hundreds). These met...
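The cost-adjustment idea described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's exact algorithm: the function name `adversarial_costs` and the scaling factor `alpha` are hypothetical, and the sketch simply makes a modification direction cheaper when it agrees with the sign of the gradient backpropagated from the targeted CNN classifier, and more expensive otherwise, so that a distortion-minimizing embedder favors gradient-aligned changes.

```python
import numpy as np

def adversarial_costs(rho_plus, rho_minus, grad, alpha=2.0):
    """Adjust +1/-1 embedding costs so the cheaper modification
    direction tends to match the sign of the classifier gradient.

    rho_plus, rho_minus : per-element costs of +1 / -1 modifications
    grad                : gradient of the classifier's cover score
                          w.r.t. the image elements
    alpha               : hypothetical scaling factor (assumption)
    """
    rho_p = rho_plus.astype(float).copy()
    rho_m = rho_minus.astype(float).copy()
    pos = grad > 0
    neg = grad < 0
    # Where the gradient is positive, a +1 change moves the image in
    # the gradient direction: make +1 cheaper, -1 dearer.
    rho_p[pos] /= alpha
    rho_m[pos] *= alpha
    # Symmetric treatment where the gradient is negative.
    rho_m[neg] /= alpha
    rho_p[neg] *= alpha
    # Elements with zero gradient keep their original costs.
    return rho_p, rho_m
```

The adjusted cost maps would then be fed to a conventional distortion-minimizing coder (e.g., an STC-based embedder), so the message remains extractable while modifications are biased toward the adversarial direction.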
With powerful deep network architectures, such as generative adversarial networks and variational autoencoders, large amounts of photorealistic images can be generated. The generated images, which already fool human eyes successfully, were not initially intended to deceive image authentication systems. However, research communities as well as the public media have raised great concerns about whether these images could lead to serious security issues. In this paper, we address the problem of detecting deep network generated (DNG) images by analyzing the disparities in color components between real-scene images and DNG images. Existing deep networks generate images in the RGB color space and impose no explicit constraints on color correlations; therefore, DNG images show more obvious differences from real images in other color spaces, such as HSV and YCbCr, especially in the chrominance components. Furthermore, the DNG images differ from real ones when the red, green, and blue components are considered together. Based on these observations, we propose a feature set that captures color image statistics for detecting DNG images. Moreover, three different detection scenarios encountered in practice are considered and the corresponding detection strategies are designed. Extensive experiments have been conducted on face image datasets to evaluate the effectiveness of the proposed method. The experimental results show that the proposed method is able to distinguish DNG images from real ones with high accuracy.

Index Terms: Image generative model, generative adversarial networks, fake image identification, image statistics.
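The chrominance-based observation above can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's actual feature set: the function name `chrominance_histogram_features` and the use of plain per-channel histograms are my own simplifications. It converts an RGB image to YCbCr via the standard ITU-R BT.601 linear transform and summarizes the Cb/Cr chrominance channels, where the abstract says DNG images deviate most from real ones.

```python
import numpy as np

def chrominance_histogram_features(rgb, bins=16):
    """Histogram statistics of the Cb/Cr chrominance channels of an
    RGB image (H x W x 3, values in 0..255). Simplified illustration
    of chrominance-based color statistics, not the paper's features.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> YCbCr chrominance (full-range approximation)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    feats = []
    for chan in (cb, cr):
        # Normalized histogram over the 0..255 range
        hist, _ = np.histogram(chan, bins=bins, range=(0.0, 256.0),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)
```

In a detection pipeline, such per-image feature vectors for real and generated images would be fed to a standard binary classifier, with the choice of classifier depending on the detection scenario.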