2020
DOI: 10.1142/s0129065720500276
Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron

Abstract: We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, termed S4NN, akin to traditional error backpropagation, yet based on latencies. We show how app…
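The first-to-fire readout described in the abstract can be sketched in a few lines. This is an illustrative fragment, not the paper's implementation: the function name and the example spike times are hypothetical, and it only covers the decoding step (the network producing those latencies, and tie-breaking, are omitted).

```python
import numpy as np

def first_spike_class(readout_times):
    """Predict the class as the index of the earliest-firing readout neuron.

    `readout_times` holds one first-spike latency per output neuron; with
    rank-order coding the earliest spike carries the decision.
    """
    return int(np.argmin(readout_times))

# Hypothetical latencies for 3 output neurons: neuron 1 fires first.
predicted = first_spike_class([12.0, 3.5, 7.2])  # -> 1
```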

Cited by 169 publications (133 citation statements)
References 63 publications
“…However, it is known that rate coding does not allow the network to use spike times precisely, which can, in turn, enable an SNN to encode more information or process information rapidly. Supervised-learning-based SNNs using a latency-based coding scheme are a good way to decrease energy consumption compared to the rate-coding method (Mostafa, 2017; Comsa et al., 2019; Kheradpisheh and Masquelier, 2019; Zhou et al., 2019). In latency-based coding, pixel intensity is represented by the ascending order of incoming spikes, wherein higher intensity fires an earlier spike and vice versa.…”
Section: Discussion
confidence: 99%
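The latency coding described in the excerpt above — higher intensity fires earlier — can be sketched as a simple linear time-to-first-spike map. This is a hedged illustration, not the exact encoder of any cited paper: the function name, the `t_max` parameter, and the assumption of 8-bit intensities in [0, 255] are mine.

```python
import numpy as np

def intensity_to_latency(pixels, t_max=255.0):
    """Map pixel intensities to first-spike times: brighter pixels fire earlier.

    Maximum intensity (255) fires at time 0; zero intensity fires last,
    at time `t_max`. Intensities are assumed to lie in [0, 255].
    """
    pixels = np.asarray(pixels, dtype=float)
    return t_max * (1.0 - pixels / 255.0)

# A bright, a mid-gray, and a dark pixel: spike times in ascending order.
times = intensity_to_latency([255, 128, 0])  # -> [0., 127., 255.]
```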
“…As a result, more salient information about a feature is encoded as an earlier spike in the corresponding neuron, leading to overall sparser activity in an SNN. Furthermore, the inference latency (the overall number of time steps required to process an input) can drastically decrease to a few tens of time steps with appropriate learning methods, instead of the usual 50–100 time steps incurred in rate coding schemes for AGD training (Mostafa, 2017; Comsa et al., 2019; Kheradpisheh and Masquelier, 2019; Roy et al., 2019; Zhou et al., 2019).…”
Section: Discussion
confidence: 99%
“…Another approach assumes that each neuron spikes precisely once in a given time period and computes gradients with respect to these spike times (Figure 1B). Kheradpisheh and Masquelier (2020) showed that it yields state-of-the-art accuracy for spike latency-encoded versions of MNIST and Fashion-MNIST. In similar work, Comsa et al. (2019) not only achieved competitive …”
[Figure caption residue: (A) Instilling functions at the network level requires hidden neurons, which are neither connected to the input nor the network's output, to reduce their contribution to errors at the output level.]
Section: Efficient Low-Latency Processing With Single Precisely Timed…
confidence: 99%
“…The method assumes extreme sparseness of spiking because every neuron emits, at most, one spike. This representation allows efficient event-driven algorithms in which time represents itself, which translates algorithmically into a small memory footprint and low-power computation at the network level (Kheradpisheh and Masquelier, 2020; Göltz et al., 2019). Similar to a binary neural network, all processing occurs as a single volley of spikes propagates through the network.…”
Section: Efficient Low-Latency Processing With Single Precisely Timed…
confidence: 99%
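The "single volley" processing in the excerpt above — one pass of spikes through the network, each neuron firing at most once — can be sketched as an event-driven forward pass. This is a minimal sketch under stated assumptions, not the method of the cited papers: the function name, the integrate-and-fire dynamics without leak, and the fixed threshold are all simplifying choices of mine.

```python
import numpy as np

def single_volley_forward(spike_times, weights, threshold=1.0):
    """Event-driven layer pass in which every output neuron fires at most once.

    Input spikes are visited in temporal order ("time represents itself").
    Each output neuron accumulates the weight of every arriving spike and
    emits its own spike the first time its potential crosses `threshold`.
    Returns one firing time per output neuron (np.inf if it never fires).
    """
    n_out = weights.shape[1]
    potential = np.zeros(n_out)
    out_times = np.full(n_out, np.inf)
    for i in np.argsort(spike_times):        # process spikes in time order
        potential += weights[i]              # integrate the incoming spike
        crossed = (potential >= threshold) & np.isinf(out_times)
        out_times[crossed] = spike_times[i]  # fire once, then stay silent
    return out_times

# Two inputs spiking at t=1 and t=2; output 0 crosses threshold on the
# second spike, output 1 never fires.
w = np.array([[0.6, 0.2],
              [0.6, 0.2]])
out = single_volley_forward(np.array([1.0, 2.0]), w)  # -> [2., inf]
```

Because each neuron fires at most once, a full forward pass touches every synapse a single time, which is the source of the sparsity and low-power argument in the quoted passage.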