2019
DOI: 10.1109/access.2019.2914424

Intrinsic Plasticity Based Inference Acceleration for Spiking Multi-Layer Perceptron

Abstract: The intrinsic plasticity (IP) mechanism was originally found in biological neurons as an adaptive tuning scheme for the membrane potential, used to change the connection strength between neurons so that the animal brain could learn and store memories. Recently, in the field of artificial neural networks, the bio-inspired IP mechanism has attracted increasing research attention due to its ability to regulate neuron activity at a relatively homeostatic level even when the external input to a neuron is extremel…

Cited by 5 publications (3 citation statements)
References 27 publications

“…In many ANN cases, the bias is treated as a constant shared within a layer, or even set to zero. We can expect significantly better performance by introducing intrinsic plasticity into ANNs or spiking neurons (Zhang and Li, 2019; Zhang et al., 2019). A similarity between the simplified intrinsic plasticity introduced in ANNs and batch normalization has also been reported (Shaw et al., 2020).…”
Section: Optimization Strategy: Multiscale Credit Assignment (mentioning)
confidence: 99%
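The "simplified intrinsic plasticity" mentioned above can be made concrete with a classic rule of this family. Below is a minimal sketch assuming the Triesch-style IP rule, in which a per-neuron gain and bias of a sigmoid unit are adapted online so the output distribution approaches an exponential with target mean mu; this per-neuron scale-and-shift structure is what resembles batch normalization. The constants and the specific rule variant are illustrative assumptions, not details from Zhang and Li (2019) or Shaw et al. (2020).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ip_update(x, a, b, eta=0.01, mu=0.2):
    """One intrinsic-plasticity step (Triesch-style rule, illustrative).

    Adapts the per-neuron gain `a` and bias `b` so that the output
    distribution of y = sigmoid(a*x + b) approaches an exponential
    distribution with mean `mu` -- a sparse, homeostatic activity level.
    `eta` and `mu` are assumed constants for this sketch.
    """
    y = sigmoid(a * x + b)
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)
    da = eta / a + db * x
    return a + da, b + db

# Toy run: the external input is strongly biased, but IP keeps the
# mean output activity near mu instead of saturating the neuron.
rng = np.random.default_rng(0)
a, b = 1.0, 0.0
for step in range(5000):
    x = 3.0 + rng.normal()
    a, b = ip_update(x, a, b)
print(a, b)  # gain/bias settle so mean activity stays near mu = 0.2
```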
“…The output value is computed by passing the input values through the layers sequentially in the forward direction, and the weights and biases of the model are trained through error backpropagation [43]. MLP models are widely used for typical classification and regression tasks [44]. Neural network models, including MLPs, have strong advantages, such as high performance and no special preprocessing requirements, but they also have disadvantages, such as a large number of hyperparameters, a tendency to overfit, and a heavy computational burden.…”
Section: Neural Network Models (mentioning)
confidence: 99%
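As a concrete illustration of the forward pass and error backpropagation described in the quoted passage, here is a minimal two-layer MLP sketch. The layer sizes, tanh activation, squared-error loss, and learning rate are assumptions for illustration, not details taken from [43] or [44].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a two-layer MLP (assumed, not from the cited works).
n_in, n_hid, n_out, lr = 4, 8, 3, 0.1
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(x):
    # Inputs pass through the layers sequentially in the forward direction.
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def train_step(x, t):
    # Error backpropagation: gradients of the squared error are propagated
    # backward and used to update the weights and biases.
    global W1, b1, W2, b2
    h, y = forward(x)
    d_y = y - t                        # output-layer error
    d_h = (d_y @ W2.T) * (1 - h ** 2)  # backpropagated through tanh
    W2 -= lr * np.outer(h, d_y); b2 -= lr * d_y
    W1 -= lr * np.outer(x, d_h); b1 -= lr * d_h

# Toy regression: fit a single input/target pair.
x = rng.normal(size=n_in); t = np.array([1.0, 0.0, -1.0])
for _ in range(200):
    train_step(x, t)
print(forward(x)[1])  # output approaches the target t
```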
“…In [53], an IP mechanism that depends on the threshold level was applied to a multi-layer SNN. The IP mechanism adapted the firing rates of neurons to a steady level while maximizing the information entropy.…”
Section: Related Work (mentioning)
confidence: 99%
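To make the quoted mechanism concrete, the sketch below applies a threshold-level homeostatic rule to a leaky integrate-and-fire neuron: the firing threshold is nudged up when the running firing rate exceeds a target and down otherwise, holding the rate at a steady level. The update rule and all constants are assumptions; the scheme in [53] is derived from information-entropy maximization, which this generic sketch does not implement.

```python
import numpy as np

def lif_with_ip(inputs, theta=1.0, tau=20.0, eta=0.01, r_target=0.05):
    """Leaky integrate-and-fire neuron with a threshold-based IP rule.

    After each time step, the firing threshold `theta` is adjusted so the
    running firing-rate estimate approaches `r_target`: spiking raises the
    threshold, silence lowers it. A generic homeostatic sketch, not the
    exact entropy-maximizing scheme of [53].
    """
    v, rate, spikes = 0.0, 0.0, []
    for I in inputs:
        v += (-v + I) / tau                # leaky integration of the input
        s = 1.0 if v >= theta else 0.0     # spike when threshold is crossed
        if s:
            v = 0.0                        # reset membrane potential
        rate += 0.01 * (s - rate)          # running firing-rate estimate
        theta += eta * (rate - r_target)   # homeostatic threshold update
        spikes.append(s)
    return np.array(spikes), theta

rng = np.random.default_rng(0)
spikes, theta = lif_with_ip(rng.uniform(0, 2, 10000))
print(spikes.mean(), theta)  # mean firing rate settles near r_target
```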