2017
DOI: 10.1109/tnnls.2017.2726060

Supervised Learning Based on Temporal Coding in Spiking Neural Networks

Abstract: Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the hard nonlinearity of spike generation and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme, where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. …
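The differentiability claim has a concrete closed form. In the paper's model (non-leaky integrate-and-fire neurons with exponentially decaying synaptic current, threshold and synaptic time constant normalized to 1), the substitution z = exp(t) turns the threshold-crossing condition into the linear relation z_out = (sum_{j in C} w_j z_j) / (sum_{j in C} w_j - 1), where C is the causal set of input spikes arriving before the output spike. The Python sketch below is a minimal reconstruction under these assumptions, not the authors' reference code; it computes an output spike time by growing the causal set in arrival order:

    import numpy as np

    def spike_time(t_in, w):
        # Output spike time of a non-leaky IF neuron with exponentially
        # decaying synaptic current (tau_syn = 1, threshold = 1). In the
        # z-domain (z = exp(t)) the threshold crossing satisfies
        #   z_out = sum_{j in C} w_j z_j / (sum_{j in C} w_j - 1)
        # over the causal set C of inputs arriving before the output spike.
        z_in = np.exp(t_in)
        order = np.argsort(t_in)              # consider inputs in arrival order
        for k in range(1, len(order) + 1):
            c = order[:k]                     # candidate causal set
            w_sum = w[c].sum()
            if w_sum <= 1.0:                  # too little drive to reach threshold
                continue
            t_out = np.log((w[c] * z_in[c]).sum() / (w_sum - 1.0))
            # consistent if the spike fires before the next (excluded) input
            if k == len(order) or t_out <= t_in[order[k]]:
                return t_out
        return np.inf                         # threshold is never reached

    t_in = np.array([0.1, 0.3, 0.7])
    w = np.array([0.9, 0.8, 0.6])
    print(spike_time(t_in, w))                # ~0.93

Because t_out is built from sums, products, and logarithms of the inputs, its derivatives with respect to the weights and input spike times exist everywhere except where the causal set changes, which matches the "differentiable almost everywhere" claim.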

Cited by 257 publications (368 citation statements)
References 23 publications
“…More recently, Mostafa (2016) used a temporal coding scheme in which information is encoded in spike times instead of spike rates, and the dynamics are cast in a differentiable form. As a result, the network can be trained using standard gradient descent to achieve very accurate, sparse, and power-efficient classification.…”
Section: Discussion
confidence: 99%
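The "standard gradient descent" point follows directly from the closed form above. As a hypothetical illustration (not the authors' code), the PyTorch snippet below differentiates a spike-time loss through the z-domain relation, assuming for simplicity that all three inputs are causal:

    import torch

    z_in = torch.exp(torch.tensor([0.1, 0.3, 0.7]))        # z = exp(t) for three input spikes
    w = torch.tensor([0.9, 0.8, 0.6], requires_grad=True)  # weights (all assumed causal)

    z_out = (w @ z_in) / (w.sum() - 1.0)   # z-domain spike-time relation
    loss = (torch.log(z_out) - 0.5) ** 2   # drive the output spike toward target time t = 0.5
    loss.backward()                        # exact gradients despite the spiking dynamics
    print(w.grad)

A deeper network stacks this relation layer by layer, so the whole input-output map stays differentiable and trainable with an ordinary optimizer.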
“…More recently, Mostafa (2016) used a temporal coding scheme in which information is encoded in spike times instead of spike rates, and the dynamics are cast in a differentiable form. As a result, the network can be trained using standard gradient descent to achieve very accurate, sparse, and power-efficient classification.…”
Section: Relation To Prior Work In Spiking Deep Neural Networks
confidence: 99%
“…We addressed the connectivity limitations of neuromorphic chips by testing the lottery ticket hypothesis [19], with positive results, achieving even lower energy consumption. Previous studies [16,23,32] applied similar models to simple image datasets, where the features' precise values are not needed to solve the task; that is, the task can be solved using only black-and-white inputs [32]. The novelty of the proposed work lies in showing how these models can process more complex continuous features, such as Mel filterbank coefficients, in a real-world application scenario, highlighting the power of the method and the high level of sparsity with which these features can be represented, in contrast with ANN-to-SNN conversion methods that rely on spike rates to convey information [18] and with the spiking VAD solution proposed in [17], which consumes 26.1 mW.…”
Section: Conclusion and Limitations
confidence: 99%