High-capacity image steganography, which conceals a secret image within a cover image, is a technique for protecting sensitive data such as faces and fingerprints. Previous methods focus on security during transmission and therefore risk privacy leakage once secret images are restored at the receiving end. To address this issue, we propose a framework called Multitask Identity-Aware Image Steganography (MIAIS) that performs recognition directly on container images without restoring the secret images. The key challenge of direct recognition is to preserve the identity information of the secret image in the container image while keeping the container image visually similar to the cover image. We therefore introduce a simple content loss to preserve identity information and design a minimax optimization to balance these contradictory objectives. We also demonstrate that the robustness results transfer across different cover datasets. To remain flexible in cases where secret image restoration is required, we incorporate an optional restoration network into our method, yielding a multitask framework. Experiments under the multitask scenario show the effectiveness of our framework compared with other visual information hiding methods and state-of-the-art high-capacity image steganography methods.
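The tension between preserving identity and staying close to the cover can be sketched as a small minimax (descent-ascent) problem. The following is an illustrative toy only, not the MIAIS implementation: the hiding network is collapsed to a single learnable perturbation `p` (container = cover + p), the identity-feature extractor is a fixed random linear map `F`, and the minimax is posed Lagrangian-style, minimizing the content (identity) loss over `p` while an ascent step on the multiplier `lam` enforces a concealment budget `eps`. All names (`F`, `p`, `lam`, `eps`) are stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 8))            # stand-in identity-feature extractor
cover = rng.standard_normal(8)
secret = rng.standard_normal(8)

p = np.zeros(8)                            # perturbation added to the cover
lam, eps = 0.0, 1.0                        # multiplier and concealment budget
lr_p, lr_lam = 0.005, 0.005

initial_content = float(np.sum((F @ cover - F @ secret) ** 2))
for _ in range(2000):
    container = cover + p
    # content-loss gradient: pull the container's identity features
    # toward those of the secret image
    g_content = 2.0 * F.T @ (F @ container - F @ secret)
    # concealment-loss gradient: keep the container close to the cover
    g_conceal = 2.0 * p
    p -= lr_p * (g_content + lam * g_conceal)       # descent on p
    conceal = float(p @ p)
    lam = max(0.0, lam + lr_lam * (conceal - eps))  # ascent on lam

content = float(np.sum((F @ (cover + p) - F @ secret) ** 2))
```

At convergence the perturbation trades identity preservation against concealment, mirroring the contradictory objectives described in the abstract; the actual method instead trains deep hiding and recognition networks end to end.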
Video steganography plays an important role in secret communication: it conceals a secret video in a cover video by perturbing pixel values in the cover frames. Imperceptibility is the first and foremost requirement of any steganographic approach. Inspired by the fact that human eyes perceive pixel perturbations differently in different video areas, a novel, effective, and efficient Deeply-Recursive Attention Network (DRANet) is proposed for video steganography; it finds areas suitable for information hiding by modelling spatio-temporal attention. DRANet contains two main components: a Non-Local Self-Attention (NLSA) block and a Non-Local Co-Attention (NLCA) block. Specifically, the NLSA block selects cover-frame areas suitable for hiding by computing inter- and intra-frame correlations of the cover video. The NLCA block produces enhanced representations of the secret frames, improving the robustness of the model and alleviating the influence of different areas in the secret video. Furthermore, DRANet reduces the number of model parameters by applying the same operations recursively to the different frames of an input video. Experimental results show that the proposed DRANet achieves better performance with fewer parameters than state-of-the-art competitors.
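The correlation computation at the heart of a non-local self-attention block can be sketched in a few lines. This is a minimal NumPy sketch of the generic non-local block over flattened frame positions, not the DRANet implementation; the projection matrices are random stand-ins for learned weights.

```python
import numpy as np

def non_local_attention(x, seed=0):
    """Minimal non-local self-attention over flattened positions.

    x: (N, C) array, where N indexes positions within and across frames
    and C is the channel dimension. Illustrative only: the three linear
    projections below are random placeholders for trained parameters.
    """
    rng = np.random.default_rng(seed)
    n, c = x.shape
    w_theta = rng.standard_normal((c, c)) / np.sqrt(c)
    w_phi = rng.standard_normal((c, c)) / np.sqrt(c)
    w_g = rng.standard_normal((c, c)) / np.sqrt(c)
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    scores = theta @ phi.T / np.sqrt(c)          # pairwise position correlations
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability for softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # each row sums to 1
    return x + attn @ g                          # residual connection
```

Because every position attends to every other position, correlations between areas of different frames (inter-frame) and within one frame (intra-frame) are captured in a single matrix product, which is what lets such a block score areas by their suitability for hiding.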