2022
DOI: 10.3390/s22249940
Improving Network Training on Resource-Constrained Devices via Habituation Normalization

Abstract: As a technique for accelerating and stabilizing training, batch normalization (BN) is widely used in deep learning. However, BN cannot effectively estimate the mean and variance of samples when training or fine-tuning with small batches of data on resource-constrained devices, which lowers the accuracy of the deep learning model. In the fruit fly olfactory system, an algorithm based on the “negative image” habituation model can filter redundant information and improve numerical stability…
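The small-batch failure mode the abstract describes is easy to see numerically: at train time BN normalizes each channel with per-batch statistics, and for tiny batches those are noisy estimates of the true activation distribution. A minimal sketch in PyTorch (the synthetic activations and batch sizes are assumptions for illustration, not from the paper):

```python
import torch

# Minimal sketch (assumed setup, not from the paper): measure how noisy
# BatchNorm's per-batch mean/variance estimates become as the batch shrinks.
torch.manual_seed(0)

population = torch.randn(10_000, 64)            # synthetic activations, 64 channels
true_mean = population.mean(dim=0)
true_var = population.var(dim=0, unbiased=False)

for batch_size in (2, 8, 32, 256):
    idx = torch.randint(0, population.shape[0], (batch_size,))
    batch = population[idx]
    batch_mean = batch.mean(dim=0)              # what BN uses at train time
    batch_var = batch.var(dim=0, unbiased=False)
    mean_err = (batch_mean - true_mean).abs().mean().item()
    var_err = (batch_var - true_var).abs().mean().item()
    print(f"batch={batch_size:4d}  |mean err|={mean_err:.3f}  |var err|={var_err:.3f}")
```

As the batch size shrinks, both errors grow; this is exactly the small-batch regime on resource-constrained devices where the paper proposes habituation normalization as an alternative.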

Cited by 1 publication (1 citation statement)
References 28 publications (64 reference statements)
“…However, if the batch size is too large, it definitely causes memory overflow. Meanwhile, there is no gradient descent in the gradient direction of different batches, making it easy to fall into the local minimum (Lai et al., 2022). The hyperparameters of the 1D-CNN are optimized based on MFO, and the batch sizes are set to 32, 64, 128, 256, 512, and 1,024.…”
Section: Results
Confidence: 99%
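The swept batch sizes in the statement above are a standard grid search over the batch-size hyperparameter. A minimal sketch of that pattern in PyTorch (the toy data, model, and single-epoch loop are assumptions, not the cited 1D-CNN/MFO setup):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model standing in for the cited 1D-CNN (assumptions,
# not the MFO-optimized architecture from the citing paper).
torch.manual_seed(0)
X, y = torch.randn(2048, 1, 128), torch.randint(0, 4, (2048,))

for batch_size in (32, 64, 128, 256, 512, 1024):     # the swept values
    model = nn.Sequential(
        nn.Conv1d(1, 8, kernel_size=5),              # (B, 8, 124)
        nn.ReLU(),
        nn.Flatten(),                                # (B, 992)
        nn.Linear(8 * 124, 4),
    )
    opt = torch.optim.Adam(model.parameters())
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    for xb, yb in loader:                            # one epoch per setting
        loss = nn.functional.cross_entropy(model(xb), yb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"batch={batch_size:4d}  final-batch loss={loss.item():.3f}")
```

The largest settings are where the memory-overflow risk the statement mentions shows up in practice, since activation memory grows roughly linearly with batch size.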