2019
DOI: 10.48550/arxiv.1911.10124
Preprint

Technical report: supervised training of convolutional spiking neural networks with PyTorch

Abstract: Recently, it has been shown that spiking neural networks (SNNs) can be trained efficiently, in a supervised manner, using backpropagation through time. Indeed, the most commonly used spiking neuron model, the leaky integrate-and-fire neuron, obeys a differential equation which can be approximated using discrete time steps, leading to a recurrent relation for the potential. The firing threshold causes optimization issues, but they can be overcome using a surrogate gradient. Here, we extend previous approaches in…
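The discrete-time recurrence for the membrane potential described in the abstract can be sketched in PyTorch as follows. This is a minimal illustration only: the decay factor `alpha`, threshold `v_th`, and reset-by-subtraction scheme are assumptions, not values taken from the report. The surrogate-gradient handling of the threshold is sketched separately after the citation statements below.

```python
import torch

def lif_step(v, spikes_prev, input_current, alpha=0.9, v_th=1.0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v             -- membrane potential from the previous step
    spikes_prev   -- spikes emitted at the previous step (0/1 tensor)
    input_current -- synaptic input at this step
    alpha         -- leak/decay factor from discretising the LIF ODE (assumed value)
    v_th          -- firing threshold (assumed value)
    """
    # Recurrent relation for the potential: leak, add input, subtract the reset.
    v = alpha * v + input_current - v_th * spikes_prev
    # Hard thresholding is non-differentiable; in training, its derivative is
    # replaced by a surrogate gradient in the backward pass.
    spikes = (v >= v_th).float()
    return v, spikes
```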

Cited by 7 publications (7 citation statements)
References 19 publications
“…Due to the non-differentiability of the thresholding activation function in spiking neurons, it is not straightforward to apply backpropagation and gradient descent to SNNs. Various solutions have been proposed to tackle this problem, including computing gradients with respect to spike rates instead of single spikes [52,53,54,55], using differentiable smoothed spike functions [56], using surrogate gradients for the threshold function in the backward pass [57,58,59,60,61,62], and transfer learning by sharing weights between the SNN and an ANN [49,63]. In another approach, known as latency learning, a neuron's activity is defined by the firing time of its first spike, so the gradient of the thresholding function is not needed.…”
Section: Discussion
confidence: 99%
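The surrogate-gradient option mentioned in the statement above keeps the hard threshold in the forward pass and substitutes a smooth derivative in the backward pass. A minimal PyTorch sketch, assuming a sigmoid-shaped surrogate with a steepness parameter `scale` (both choices are illustrative, not taken from the cited works):

```python
import torch

class SpikeFunction(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, v, scale=10.0):
        ctx.save_for_backward(v)
        ctx.scale = scale
        # Spike when the (threshold-shifted) potential crosses zero.
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a steep sigmoid: a smooth stand-in for the Dirac delta
        # that the true threshold derivative would be.
        sig = torch.sigmoid(ctx.scale * v)
        surrogate = ctx.scale * sig * (1.0 - sig)
        return grad_output * surrogate, None

spike_fn = SpikeFunction.apply
```

In a step like the `lif_step` sketch above, `spike_fn(v - v_th)` would replace the hard comparison, making the whole recurrence trainable with backpropagation through time.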
“…Before the audio signals are fed into the model, they should be preprocessed. For preprocessing, we use an approach similar to Zimmer et al. (2019). By removing extraneous information from the audio signal, the model becomes simpler and also more robust to noise.…”
Section: Methods
confidence: 99%
“…The differential equations of LIF models can be approximated by linear recurrent equations in discrete time. Introducing a reset term U_i^R[n] for the potential, the neuron dynamics can then be fully described by the following equations [13].…”
Section: Overview of the Leaky Integrate-and-Fire Model (LIF)
confidence: 99%
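The equations referred to in [13] are not reproduced in this excerpt. As an illustrative sketch only, a standard discrete-time LIF update with a subtractive reset term takes roughly the following form; the decay factor α, input current I_i[n], threshold ϑ, and Heaviside step Θ are assumed notation, not copied from the cited report.

```latex
% Illustrative discrete-time LIF update consistent with the description above.
\begin{align}
  U_i[n+1] &= \alpha \, U_i[n] + I_i[n] - U_i^{R}[n] \\
  U_i^{R}[n] &= \vartheta \, S_i[n] \\
  S_i[n] &= \Theta\!\left(U_i[n] - \vartheta\right)
\end{align}
```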