2015
DOI: 10.1186/s13634-015-0222-1
The effect of whitening transformation on pooling operations in convolutional autoencoders

Abstract: Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the preprocessing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to stu…


Cited by 19 publications (7 citation statements). References 14 publications.
“…It should be noted that Eq. (3) corresponds to the so-called whitening 57,58, where the data is transformed to exhibit zero mean and unit standard deviation. The last format is min-max normalization, where max(|X_zc|) denotes selection of the column-wise maximum element.…”
Section: Proposed Modeling Approach: Fully Adaptive Regression Model
confidence: 99%
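The two normalizations the excerpt describes can be sketched as follows. This is a minimal NumPy illustration, assuming the excerpt's "whitening" means per-column standardization (zero mean, unit standard deviation) rather than full decorrelating whitening; the function names `zscore_whiten` and `minmax_normalize` are illustrative, not from the cited paper.

```python
import numpy as np

def zscore_whiten(X):
    """Standardize each column to zero mean and unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def minmax_normalize(X_zc):
    """Scale each column by its maximum absolute element, as in the excerpt."""
    return X_zc / np.abs(X_zc).max(axis=0)

# Toy data: two features on very different scales.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
X_zc = zscore_whiten(X)       # each column now has mean 0, std 1
X_mm = minmax_normalize(X_zc) # each column's largest |value| becomes 1
```

Applying `minmax_normalize` after standardization, as the excerpt suggests, guarantees every feature lies in [-1, 1] regardless of its original scale.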
“…A pooling layer in a convolutional network combines the outputs of neuron clusters in the previous layer to reduce the resolution of feature maps and achieve spatial invariance [9,10,27]. After the pooling operation, the computational cost is reduced and over-fitting can be avoided.…”
Section: Pooling Layer
confidence: 99%
“…Pooling regions can overlap one another in varying sizes [27]. Though several pooling methods have been proposed, average pooling and max pooling are still the most common methods.…”
Section: Pooling Layer
confidence: 99%
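The average and max pooling operations mentioned in these excerpts can be sketched in a few lines of NumPy. This is an illustrative single-channel implementation (the name `pool2d` is ours, not from the paper); setting `stride` smaller than `size` gives the overlapping pooling regions the excerpt refers to.

```python
import numpy as np

def pool2d(fmap, size=2, stride=2, mode="max"):
    """Pool a 2-D feature map; non-overlapping 2x2 windows by default."""
    h, w = fmap.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = fmap[i * stride:i * stride + size,
                          j * stride:j * stride + size]
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

fmap = np.array([[ 1.,  2.,  3.,  4.],
                 [ 5.,  6.,  7.,  8.],
                 [ 9., 10., 11., 12.],
                 [13., 14., 15., 16.]])
pooled_max = pool2d(fmap, mode="max")  # [[6, 8], [14, 16]]
pooled_avg = pool2d(fmap, mode="avg")  # [[3.5, 5.5], [11.5, 13.5]]
```

Both variants quarter the resolution of the 4x4 map, which is the source of the reduced computational cost the excerpt describes.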
“…Specifically, the SAE model hopes the neural network can recover the input data through training, namely x̂^(i) = W_white x^(i). Considering the constraints of weight attenuation and the sparsity of hidden responses, the overall cost function can be expressed as Formula (3) [11,12]:…”
Section: SAE
confidence: 99%
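The excerpt's Formula (3) is not reproduced on this page, but the standard sparse-autoencoder objective it alludes to combines three terms: average reconstruction error, weight decay (the "weight attenuation" constraint), and a KL-divergence sparsity penalty on the hidden responses. The sketch below assumes that standard form; the names `sae_cost` and `kl_divergence` and the default hyperparameters are illustrative and may differ from the cited paper's exact formulation.

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    """KL divergence between Bernoulli means rho (target) and rho_hat (observed)."""
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sae_cost(X, X_hat, A, W, lam=1e-4, beta=3.0, rho=0.05):
    """Standard sparse-autoencoder objective (assumed, not the paper's exact Formula (3)).

    X, X_hat: (m, n) inputs and reconstructions
    A:        (m, k) hidden-layer activations
    W:        weight matrix subject to weight decay
    """
    m = X.shape[0]
    recon = 0.5 / m * np.sum((X_hat - X) ** 2)   # average reconstruction error
    decay = 0.5 * lam * np.sum(W ** 2)           # weight attenuation term
    rho_hat = A.mean(axis=0)                     # average activation per hidden unit
    sparsity = beta * np.sum(kl_divergence(rho, rho_hat))
    return recon + decay + sparsity

# Sanity check: perfect reconstruction, zero weights, and hidden activations
# exactly at the sparsity target rho give zero total cost.
X = np.ones((2, 4))
cost = sae_cost(X, X, A=np.full((2, 3), 0.05), W=np.zeros((3, 4)))
```

Each term vanishes independently in that sanity check, which is a quick way to verify an implementation of this kind of composite objective.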