2019
DOI: 10.1016/j.neucom.2018.11.014
BP-STDP: Approximating backpropagation using spike timing dependent plasticity

Abstract: The problem of training spiking neural networks (SNNs) is a necessary precondition to understanding computations within the brain, a field still in its infancy. Previous work has shown that supervised learning in multi-layer SNNs enables bio-inspired networks to recognize patterns of stimuli through hierarchical feature acquisition. Although gradient descent has shown impressive performance in multi-layer (and deep) SNNs, it is generally not considered biologically plausible and is also computationally expensi…

Cited by 179 publications (97 citation statements); References 48 publications
“…In [118], a supervised learning method was proposed (BP-STDP) where the backpropagation update rules were converted to temporally local STDP rules for multilayer SNNs. This model achieved accuracies comparable to equal-sized conventional and spiking networks for the MNIST benchmark (see Section III-A).…”
Section: B. Learning Rules in SNNs
confidence: 99%
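The statement above summarizes BP-STDP's core idea: replacing the global backpropagation error signal with a temporally local, STDP-like update. A minimal sketch of such a rule, assuming a single layer of integrate-and-fire neurons with binary spike trains (the function name, shapes, and window length here are hypothetical simplifications for illustration, not the paper's exact formulation):

```python
import numpy as np

def bp_stdp_update(w, pre, post, target, lr=0.01, window=5):
    """Sketch of a BP-STDP-style temporally local update (illustrative only).

    w      : (n_out, n_in) weight matrix
    pre    : (T, n_in)  binary presynaptic spike train
    post   : (T, n_out) binary output spike train
    target : (T, n_out) binary desired spike train
    """
    T = pre.shape[0]
    dw = np.zeros_like(w)
    for t in range(T):
        # Eligibility: 1 where the presynaptic neuron fired within the
        # last `window` time steps, making the rule temporally local.
        lo = max(0, t - window + 1)
        elig = (pre[lo:t + 1].sum(axis=0) > 0).astype(float)   # (n_in,)
        # Error at this step plays the role of the backprop delta.
        err = target[t] - post[t]                               # (n_out,)
        dw += lr * np.outer(err, elig)
    return w + dw
```

The design point is that the weight change depends only on quantities available at the synapse around time t (recent presynaptic activity and the local output/target mismatch), which is what makes the rule STDP-like rather than requiring a full backward pass.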
“…In Mostafa (2017) [10], the use of 800 IF neurons with alpha functions complicates the neural processing and the learning procedure of the network. In Tavanaei et al. (2018) [15], the network's computational cost is quite large due to the use of rate coding and 1000 hidden neurons. In Comsa et al. (2019) [11], the use of a complicated SRM neuron model with an exponential synaptic current makes event-based implementation difficult.…”
Section: MNIST Dataset
confidence: 99%
“…Current SNN models for pattern recognition can generally be categorized into three classes: indirect training [12,13,14,15,16,17,18,19], direct SL training with BP [11,26,20,21,22,23,53], and plasticity-based unsupervised training with supervised modules [54,24,25]. One model pretrained for optimal initial weights and then used current-based BP to re-train all-layer weights in a supervised way [53]; however, this also resulted in the model being bio-implausible due to the use of the BP algorithm.…”
Section: Comparison With Other SNN Models
confidence: 99%
“…For the indirect SL method, ANNs are first trained and then mapped to equivalent SNNs by different conversion algorithms that transform real-valued computing into spike-based computing [12,13,14,15,16,17,18,19]; however, this method does not incorporate SNN learning and therefore provides no heuristic information on how to train an SNN. The direct SL method is based on the BP algorithm [11,20,21,22,23], e.g., using membrane potentials as continuous variables for calculating errors in BP [20,23] or using a continuous activity function to approximate neuronal spike activity and obtain a differentiable activity for the BP algorithm [11,22]. However, such research must still perform numerous real-valued computations and non-local communications during training; thus, BP-based methods are potentially as energy-inefficient as ANNs and also lack bio-plausibility.…”
Section: Introduction
confidence: 99%
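The statement above mentions using a continuous activity function to approximate the spike nonlinearity so that it becomes differentiable for BP. A minimal sketch of that idea, assuming a steep-sigmoid surrogate (the `beta` steepness parameter and the specific surrogate shape are illustrative choices, not necessarily those used in [11,22]):

```python
import numpy as np

def spike(v, threshold=1.0):
    # The true spike nonlinearity: a hard threshold on membrane potential,
    # whose derivative is zero almost everywhere (and undefined at threshold),
    # which is why vanilla BP cannot be applied directly.
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    # Surrogate derivative: the gradient of a steep sigmoid centered at the
    # threshold, substituted for the spike function's derivative in the
    # backward pass. It peaks where v is near the firing threshold.
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)
```

In the forward pass the hard `spike` function is kept, so the network still communicates with binary events; only the backward pass swaps in `surrogate_grad`, giving a usable error signal for neurons whose potential is close to threshold.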