Traditional steganography methods often hide secret data by establishing a mapping relationship between the secret data and a cover image, or by embedding the data directly in noisy regions, but they suffer from low embedding capacity. Drawing on deep learning, in this paper we propose a new image steganography scheme based on a U-Net structure. First, a deep neural network consisting of a hiding network and an extraction network is trained in paired form; then, the sender uses the hiding network to embed the secret image into another full-size image without any hand-crafted modification and sends the result to the receiver. Finally, the receiver uses the extraction network to correctly reconstruct both the secret image and the original cover image. The experimental results show that the proposed scheme compresses and distributes the information of the embedded secret image across all available bits of the cover image, which not only eliminates obvious visual cues but also increases the embedding capacity.
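To make the paired-training idea concrete, below is a minimal PyTorch sketch, not the paper's actual architecture: a U-Net-style HidingNet maps a cover/secret pair to a stego image, and an ExtractionNet recovers the secret. All class names, layer sizes, and the loss weighting are illustrative assumptions, and for brevity the sketch recovers only the secret image rather than both the secret and the cover.

```python
import torch
import torch.nn as nn

class HidingNet(nn.Module):
    """Illustrative U-Net-style hiding network (not the paper's model).
    Input: cover and secret concatenated on the channel axis (6 channels);
    output: a 3-channel stego image."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Upsample back to full resolution; a skip connection from enc1
        # gives the decoder direct access to fine cover detail.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(64, 3, 3, padding=1)  # 64 = 32 (dec1) + 32 (skip)

    def forward(self, cover, secret):
        x = torch.cat([cover, secret], dim=1)
        e1 = self.enc1(x)
        d1 = self.dec1(self.enc2(e1))
        return torch.sigmoid(self.out(torch.cat([d1, e1], dim=1)))

class ExtractionNet(nn.Module):
    """Plain convolutional extractor: stego image -> reconstructed secret."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, stego):
        return self.body(stego)

# Paired training: both losses are backpropagated jointly, so the stego
# image stays close to the cover while the secret remains recoverable.
hide, extract = HidingNet(), ExtractionNet()
opt = torch.optim.Adam(list(hide.parameters()) + list(extract.parameters()), lr=1e-3)
cover = torch.rand(4, 3, 64, 64)   # dummy batch; real training uses image data
secret = torch.rand(4, 3, 64, 64)
stego = hide(cover, secret)
loss = nn.functional.mse_loss(stego, cover) + \
       nn.functional.mse_loss(extract(stego), secret)
opt.zero_grad(); loss.backward(); opt.step()
```

Training against a joint cover-similarity and secret-recovery objective of this kind is what lets a learned scheme spread the secret's information across the whole stego image rather than into fixed low-order bit positions.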
Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal- or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and to other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or whether it could also depend on nonretinotopic representations of the stimulus, which are particularly important for multisensory and sensorimotor integration as well as for the maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control the spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but was independent of their absolute locations in head- and world-centered frames. These findings indicate location specificity of perceptual learning beyond the retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli.

coordinate system | motion discrimination | retinotopic specificity | spatiotopic specificity | plasticity

Visual information can be encoded not only in eye-centered (i.e., retinotopic) but also in nonretinotopic reference frames, such as head-, world-, or object-centered coordinate systems. Psychophysical studies have revealed both retinotopic and spatiotopic processing mechanisms in some visual tasks (1-6). In a motion-detection task, for instance, two subthreshold stimuli can be temporally integrated when they appear at the same retinal location (retinotopic integration) or at different retinal locations but the same spatial location (spatiotopic integration) when a gaze shift is involved (1). Electrophysiological and functional MRI (fMRI) studies have shown that many cortical areas can represent visual stimuli in head-, body-, world-, or object-centered coordinates (7-13). In addition to multiple reference frames, dynamic remapping of the retinotopic receptive fields (RFs) of neurons around the time of saccadic eye movements could also contribute to spatiotopic processing (14-16). Extraretinotopic processing is not only suited for multisensory integration and sensory-motor control (17, 18) but is also related to some essential functions of the visual system, such as mediating spatiotopic temporal integration of visual stimuli and maintaining stable and continuous visual percepts across eye movements (19). Although nonretinotopic visual processing has been widely explored from different perspectives,...
File fragment classification is an important step in digital forensics. The most popular methods are based on traditional machine learning, extracting features such as N-grams, Shannon entropy, or Hamming weights. However, these features are far from sufficient to classify file fragments. In this paper, we propose a novel scheme based on fragment-to-grayscale image conversion and deep learning to extract more hidden features and thereby improve classification accuracy. Benefiting from multi-layered feature maps, our deep convolutional neural network (CNN) model can extract nearly ten thousand features through the non-linear connections between neurons. The proposed CNN model was trained and tested on the public GovDocs dataset. The experimental results show that we achieve 70.9% classification accuracy, which is higher than that of existing works.
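As a rough illustration of the fragment-to-grayscale idea, here is a minimal PyTorch sketch under our own assumptions: the fragment length, image side, layer sizes, class count, and the FragmentCNN name are all illustrative, not the paper's reported configuration. Each byte of a raw fragment becomes one grayscale pixel, and a small CNN classifies the resulting image.

```python
import numpy as np
import torch
import torch.nn as nn

def fragment_to_grayscale(fragment: bytes, side: int = 32) -> torch.Tensor:
    """Map a raw file fragment to a square grayscale image: each byte
    (0-255) becomes one pixel intensity, zero-padded if the fragment is
    shorter than side*side bytes."""
    buf = np.zeros(side * side, dtype=np.uint8)
    data = np.frombuffer(fragment[: side * side], dtype=np.uint8)
    buf[: len(data)] = data
    img = buf.reshape(side, side).astype(np.float32) / 255.0
    return torch.from_numpy(img).unsqueeze(0)  # shape (1, side, side)

class FragmentCNN(nn.Module):
    """Small illustrative CNN over the grayscale rendering of a fragment."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)           # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

# Example: classify a 512-byte fragment into one of 10 hypothetical file types.
model = FragmentCNN(num_classes=10)
x = fragment_to_grayscale(b"\x89PNG" + b"\x00" * 508).unsqueeze(0)  # batch of 1
logits = model(x)  # argmax over logits gives the predicted file type
```

The appeal of this representation is that the convolutional layers can pick up local byte patterns (magic numbers, structural headers, repeated encodings) automatically, instead of relying on a fixed hand-chosen feature set.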