2017 IEEE Biomedical Circuits and Systems Conference (BioCAS)
DOI: 10.1109/biocas.2017.8325230

Algorithm and hardware design of discrete-time spiking neural networks based on back propagation with binary activations

Abstract: We present a new back propagation based training algorithm for discrete-time spiking neural networks (SNN). Inspired by recent deep learning algorithms on binarized neural networks, binary activation with a straight-through gradient estimator is used to model the leaky integrate-fire spiking neuron, overcoming the difficulty in training SNNs using back propagation. Two SNN training algorithms are proposed: (1) SNN with discontinuous integration, which is suitable for rate-coded input spikes, and (2) SNN with c…
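As a rough illustration of the mechanism the abstract describes, the sketch below models one discrete-time leaky integrate-and-fire step with a hard binary activation whose backward pass uses a straight-through, ReLU-like surrogate gradient. This is a minimal PyTorch sketch under our own assumptions (the leak factor, the soft reset, and the names BinarySpike and lif_step are illustrative), not the paper's reference implementation.

```python
import torch

class BinarySpike(torch.autograd.Function):
    """Hard-threshold (binary) spike activation with a straight-through,
    ReLU-like surrogate gradient, in the spirit of the abstract above."""

    @staticmethod
    def forward(ctx, v_mem, threshold):
        ctx.save_for_backward(v_mem)
        return (v_mem >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_mem,) = ctx.saved_tensors
        # Straight-through estimator: let the gradient pass wherever the
        # membrane potential is positive (like the derivative of a ReLU),
        # instead of the true derivative, which is zero almost everywhere.
        surrogate = (v_mem > 0).float()
        return grad_output * surrogate, None


def lif_step(v_mem, x, w, leak=0.9, threshold=1.0):
    """One discrete-time leaky integrate-and-fire step (illustrative values)."""
    v_mem = leak * v_mem + x @ w              # leaky integration of inputs
    spikes = BinarySpike.apply(v_mem, threshold)
    v_mem = v_mem - spikes * threshold        # soft reset after firing
    return v_mem, spikes
```

Because the surrogate gradient is nonzero, such a step can sit inside an ordinary backpropagation(-through-time) loop, which is what makes end-to-end BP training of a binary-activation SNN feasible.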


Cited by 51 publications (57 citation statements)
References 20 publications
“…For example, to compare with existing on-chip works, Yin et al. (2017) presented a new BP-based training algorithm for discrete-time SNNs by using a LIF neuron model with a gradient estimator. This paper introduced a ReLU-like gradient estimation method to avoid the zero-gradient issue in conventional SNNs using LIF neurons.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, as the experimental results in Yin et al. (2017) are based on off-chip training, we suspect that this new BP algorithm still lacks an efficient on-chip training method. We think that the main difference between Yin et al. (2017) and our work is that Yin et al. (2017) proposed a new BP-based learning algorithm, while our work proposes a new BP-like learning algorithm (i.e., DFA) based on a state-of-the-art BP algorithm and implements it efficiently in hardware while delivering competitive accuracy. As a small-scale low-power accelerator, Zheng and Pinaki (2018) proposed a hardware-friendly STDP on-chip training algorithm.…”
Section: Discussion (mentioning)
confidence: 99%
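The excerpt above contrasts the BP-based algorithm of Yin et al. (2017) with the citing authors' DFA (direct feedback alignment) approach. The sketch below illustrates only the generic DFA idea: the output error is projected to a hidden layer through a fixed random feedback matrix instead of being backpropagated through the transposed weights. The layer sizes, the plain two-layer ReLU network, and the names dfa_step and B are assumptions for illustration, not the cited hardware design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; W1/W2 are trained, B is a fixed random feedback
# matrix that stands in for W2.T in the backward pass.
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))
B = rng.normal(0.0, 0.1, (n_out, n_hid))    # never updated


def relu(z):
    return np.maximum(z, 0.0)


def dfa_step(x, target, W1, W2, lr=1e-3):
    """One DFA weight update for a single sample (x, target)."""
    h = relu(x @ W1)                   # hidden activations
    y = h @ W2                         # linear readout
    e = y - target                     # output error
    # Backprop would send e @ W2.T to the hidden layer; DFA instead
    # projects the same error through the fixed random matrix B.
    delta_h = (e @ B) * (h > 0)        # ReLU derivative gates the error
    W2 = W2 - lr * np.outer(h, e)
    W1 = W1 - lr * np.outer(x, delta_h)
    return W1, W2
```

Because the feedback matrix is fixed and random, DFA avoids the weight-transport step of backpropagation, which is one reason it is attractive for on-chip training.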
“…Designs targeting very large-scale applications usually implement a large number of neurocores to maximize parallelization at the chip level [15,38,49,122,148,160]. A large number of publications report various neurocore organizations, with designs optimized according to the characteristics of the network topology to be mapped to the hardware [28,91,130,164,193], or to a targeted application [34,38,103,148,153,192,194,201].…”
Section: Core Organization For Low Power Spiking Neural Network Evaluation (mentioning)
confidence: 99%
“…This fact is directly explored in some digital spiking neural network implementations (Yin et al., 2017). Furthermore, the statefulness of the neurons and the filtering of their inputs are consistent with recurrent neural networks, even when the network is of the feedforward type.…”
Section: Distilling Machine Learning and Neuroscience For Neuromorphi… (mentioning)
confidence: 99%
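To make the "statefulness" point in the excerpt above concrete, the sketch below unrolls a feedforward LIF layer over time: the membrane potential is carried across steps exactly like an RNN hidden state, even though the layer has no recurrent weights. The leak, threshold, and soft reset are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def unrolled_lif_layer(x_seq, w, leak=0.9, threshold=1.0):
    """Run a feedforward LIF layer over T discrete time steps.

    x_seq: (T, n_in) input spikes; w: (n_in, n_out) synaptic weights.
    The membrane potential v plays the role of a recurrent hidden state.
    """
    T = x_seq.shape[0]
    n_out = w.shape[1]
    v = np.zeros(n_out)
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v = leak * v + x_seq[t] @ w            # leaky integration (input filtering)
        s = (v >= threshold).astype(float)     # binary activation
        v = v - s * threshold                  # soft reset
        spikes[t] = s
    return spikes
```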