Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security 2017
DOI: 10.1145/3082031.3083248
JPEG-Phase-Aware Convolutional Neural Network for Steganalysis of JPEG Images

Cited by 156 publications (90 citation statements) | References 15 publications
“…The BN normalizes the distribution of each feature to zero mean and unit variance, and finally scales and translates the distribution. The benefit of using a BN layer is that it desensitizes the training to the parameter initialization [18], allows the use of a larger learning rate, which speeds up learning, and improves the detection accuracy [7]. Note that, similarly to ResNet [9] and in contrast to Xu-Net, we provide a BN layer accompanied by a scale layer.…”
Section: Yedroudj-Net
confidence: 99%
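The normalize-then-scale/translate behavior quoted above can be sketched in PyTorch (an assumed framework; the module name BNWithScale is hypothetical). This mirrors the Caffe-style pairing of a BatchNorm layer with a separate Scale layer; it is an illustrative sketch, not the authors' implementation, and in practice PyTorch's BatchNorm2d(affine=True) fuses both steps.

```python
import torch
import torch.nn as nn

class BNWithScale(nn.Module):
    """Batch normalization followed by an explicit scale/translate step,
    mirroring the Caffe-style BatchNorm + Scale pairing described above."""

    def __init__(self, num_features: int):
        super().__init__()
        # Normalization only: zero mean, unit variance per feature map.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Learnable per-channel scale (gamma) and translation (beta).
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.bn(x)
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)
```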
“…The results were close to those of the state of the art. In [7], the network is built with a phase split inspired by the JPEG compression process. An ensemble of CNNs was required to obtain results slightly better than the state of the art.…”
Section: Introduction
confidence: 99%
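To make the phase-split idea concrete, the NumPy sketch below (the function name phase_split is hypothetical) separates a decompressed image into 64 sub-images according to pixel position within the 8 × 8 JPEG block grid, so that statistics tied to each DCT phase are kept apart. This is a sketch of the general idea under those assumptions, not the exact layer used in [7].

```python
import numpy as np

def phase_split(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Split a 2-D image into block*block 'phase' sub-images, one per
    position within the JPEG block grid."""
    h, w = image.shape
    h, w = h - h % block, w - w % block  # crop to a multiple of the block size
    img = image[:h, :w]
    # phases[k] collects every pixel whose (row % 8, col % 8) equals
    # (k // 8, k % 8), so each DCT phase is processed separately.
    phases = np.stack([
        img[i::block, j::block]
        for i in range(block)
        for j in range(block)
    ])
    return phases  # shape: (64, h // 8, w // 8)
```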
“…In channel-wise convolution, each input channel corresponds to K standalone output channels and is convolved with an array of K kernels. As a result, with J input channels we can get J × K output channels. The existing deep-learning steganalyzers [20]–[28], [30] incorporate the domain knowledge behind rich models and initialize the kernels in the bottom convolutional layer as high-pass filters to increase the SNR (signal-to-noise ratio). Fixed weights in the bottom kernels are adopted in most existing deep-learning steganalyzers [20]–[24], [26]–[28], [30], while learnable weights are adopted in Ye's model [25].…”
Section: A. Preliminaries
confidence: 99%
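A minimal PyTorch sketch of this channel-wise convolution, assuming J = 4 input channels and K = 8 kernels per channel (both hypothetical values), with the bottom kernels seeded by the 5 × 5 "KV" high-pass filter familiar from rich models. It illustrates the quoted recipe under those assumptions rather than any one cited model.

```python
import torch
import torch.nn as nn

J, K = 4, 8  # hypothetical channel counts for illustration

# Channel-wise convolution: groups=J makes each input channel convolve
# with its own K kernels, giving J * K output channels in total.
chanwise = nn.Conv2d(in_channels=J, out_channels=J * K,
                     kernel_size=5, padding=2, groups=J, bias=False)

# Domain-knowledge initialization: seed the bottom kernels with a
# high-pass filter (here the 5x5 KV kernel from rich models) to
# suppress image content and raise the stego signal-to-noise ratio.
kv = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-2.,  8., -12.,  8., -2.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-1.,  2.,  -2.,  2., -1.]]) / 12.0

with torch.no_grad():
    chanwise.weight.copy_(kv.expand_as(chanwise.weight))

# Fixed weights match most models cited above; set this to True to get
# the learnable variant adopted in Ye's model.
chanwise.weight.requires_grad = False
```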
“…recipe of prior works [25]–[28], [30]. The size of the output feature maps of the normal convolutional layers from bottom to top in this stage is 256 × 256, 128 × 128, and 32 × 32, respectively.…”
Section: Fully Connection / Softmax
confidence: 99%
“…Xu et al. [10] designed a compact and effective CNN architecture with multiple batch-normalization layers. This network has since become a base model for several more complex CNN models [11]–[13]. Wu et al. [14], [15] proposed a novel CNN model to detect steganography based on residual learning and achieved low detection error rates when cover and stego images are paired.…”
Section: Introduction
confidence: 99%