2021
DOI: 10.1101/2021.05.20.445029
Preprint

Brain-inspired Weighted Normalization for CNN Image Classification

Abstract: We studied a local normalization paradigm, namely weighted normalization, that better reflects the current understanding of the brain. Specifically, the normalization weight is trainable, and has a more realistic surround pool selection. Weighted normalization outperformed other normalizations in image classification tasks on Cifar10, Imagenet and a customized textured MNIST dataset. The superior performance is more prominent when the CNN is shallow. The good performance of weighted normalization may be relate…
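The abstract describes a divisive-style normalization whose surround pooling weights are learned during training. As a rough sketch only (the class name, softplus positivity constraint, semi-saturation parameter, and surround size below are assumptions, not the paper's parameterization), such a layer could look like this in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedNormalization(nn.Module):
    """Sketch of a weighted (divisive) normalization layer.

    Each activation is divided by a trainable, weighted pool of squared
    activations gathered from a cross-channel spatial surround. The exact
    parameterization in the paper may differ; this is illustrative only.
    """

    def __init__(self, channels: int, surround_size: int = 5, eps: float = 1e-6):
        super().__init__()
        # Unconstrained parameters; softplus keeps the effective pool weights
        # and semi-saturation constant positive.
        self.pool_weights = nn.Parameter(
            torch.zeros(channels, channels, surround_size, surround_size))
        self.log_sigma = nn.Parameter(torch.zeros(channels))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.softplus(self.pool_weights)
        pad = w.shape[-1] // 2
        # Weighted surround pool of squared activations (local energy).
        pooled = F.conv2d(x * x, w, padding=pad)
        sigma2 = F.softplus(self.log_sigma).view(1, -1, 1, 1) ** 2
        # Divisive normalization: response / sqrt(sigma^2 + pooled energy).
        return x / torch.sqrt(sigma2 + pooled + self.eps)
```

For example, `WeightedNormalization(channels=64)` applied to a `(N, 64, H, W)` feature map returns a tensor of the same shape; where such a layer sits relative to convolution and nonlinearity is again an assumption here, not taken from the paper.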

Cited by 8 publications (13 citation statements)
References 13 publications (13 reference statements)
“…Some studies in machine learning noticed the lack of more sophisticated forms of brain-like divisive normalization in generic feedforward CNNs, and tried to integrate them into the network [47–51]. These studies found that incorporating divisive normalization in CNNs improves image classification in some limited cases, such as when the network is more shallow [49], when the dataset requires strong center-surround separation [49], or when the divisive normalization is combined with batch normalization [50]. The correspondence we found between generic CNNs and the brain regarding center surround similarity may explain why including divisive normalization explicitly in CNNs has only limited improvement in classification, especially when the networks are deep.…”
Section: Discussion
confidence: 99%
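One of the cases cited above is divisive normalization combined with batch normalization [50]. Purely as an illustration of that pairing, and not the cited architecture, a minimal PyTorch block could interleave a fixed cross-channel divisive stage (here PyTorch's built-in LocalResponseNorm as a stand-in for a trainable divisive normalization) with standard batch normalization:

```python
import torch
import torch.nn as nn

# Illustrative only: ordering, layer choices, and hyperparameters are
# assumptions, not the architecture of the cited study.
block = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.LocalResponseNorm(size=5),  # fixed-weight cross-channel divisive pooling
    nn.BatchNorm2d(32),            # batch normalization after the divisive stage
    nn.ReLU(),
)
out = block(torch.randn(1, 3, 32, 32))  # -> shape (1, 32, 32, 32)
```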
“…Surround suppression has been considered to have a number of beneficial roles in neural computation, for example, reducing coding redundancy and yielding more efficient neural codes [12,14,19,21,22,24,25,27,60,61]. Some studies in machine learning noticed the lack of more sophisticated forms of brain-like divisive normalization in generic feedforward CNNs, and tried to integrate them into the network [47][48][49][50][51].…”
Section: Discussion
confidence: 99%
“…However, why using this biological transform to improve artificial networks devoted to image segmentation? There are examples of the effectiveness of divisive normalization in many applications as image coding [9][10][11], restoration [12,13], distortion metrics [14][15][16], classification [17][18][19][20] or even referring to its appealing statistical properties [21][22][23][24][25]. Here we illustrate the effect of divisive normalization with an explicit example.…”
Section: Why Using Divisive Normalization?
confidence: 99%
“…Regarding the nature of γ, the above example points out that the consideration of spatial neighborhoods is convenient to get the local contrast equalization along the visual field required to overcome shadow, scattering or fog. The recent literature that exploits automatic differentiation shows a range of kernel structures: some do not consider spatial interactions (either in a dense [11,16,24] or convolutional [20] combinations of features); while others do with some restrictions (either uniform weights [29], a ring of locations [18,19], or special symmetries in the space [30]). Following biology [3,8,21,28] and the intuition pointed out in the previous section.…”
Section: Models and Experiments
confidence: 99%
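The quoted passage contrasts kernel structures for the divisive weights γ: no spatial interaction, uniform surrounds, rings of locations, or symmetric spatial kernels. As an illustration of the simplest spatial case (an isotropic Gaussian neighborhood, with size and semi-saturation constant chosen arbitrarily, not matching any of the cited parameterizations), local contrast equalization by spatial divisive normalization can be sketched as:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 9, sigma: float = 2.0) -> torch.Tensor:
    # Isotropic 2-D Gaussian used as an illustrative spatial kernel gamma.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def spatial_divisive_norm(x: torch.Tensor, size: int = 9,
                          sigma: float = 2.0, const: float = 0.1) -> torch.Tensor:
    # x: (N, C, H, W). Divide each activation by a locally pooled energy
    # estimate; the pooling here is depthwise (per channel), with no
    # cross-channel interaction, unlike the richer kernels discussed above.
    c = x.shape[1]
    k = gaussian_kernel(size, sigma).repeat(c, 1, 1, 1)  # (C, 1, size, size)
    local_energy = F.conv2d(x * x, k, padding=size // 2, groups=c)
    return x / torch.sqrt(const ** 2 + local_energy)

normalized = spatial_divisive_norm(torch.randn(1, 3, 64, 64))
```

Dividing by the Gaussian-pooled energy attenuates high-contrast regions more than low-contrast ones, which is the local contrast equalization that the passage above associates with spatial neighborhoods.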