2021
DOI: 10.3389/fnins.2021.756876

SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training

Abstract: Spiking Neural Networks (SNNs) are a pathway that could potentially empower low-power event-driven neuromorphic hardware due to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate as ANNs. Error backpropagation is the most common method for directly training neural networks, promoting the prosperity of ANNs in various deep learning fields. However, since the signals …
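As a point of reference for the STDP mechanism the title and abstract refer to, here is a minimal sketch of the classic pairwise STDP window in Python. The exponential form and all constants are textbook illustrations, not the supervised SSTDP rule the paper itself derives.

```python
import numpy as np

# Illustrative pairwise STDP window (not the paper's exact SSTDP rule):
# a presynaptic spike shortly before a postsynaptic spike potentiates the
# synapse; the reverse order depresses it, each with exponential decay.
def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post -> long-term potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:         # post fires before pre -> long-term depression
        return -a_minus * np.exp(dt / tau_minus)

# Example: a causal pair 5 ms apart strengthens the synapse.
print(stdp_dw(t_pre=10.0, t_post=15.0))   # positive
print(stdp_dw(t_pre=15.0, t_post=10.0))   # negative
```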

Cited by 34 publications (23 citation statements)
References 36 publications
“…For example, Refs. [45] and [46] apply periodic scheduling to strided sparse convolutional SNNs and spike-timing-based learning rules, respectively, both with highly competitive results. We aim to decouple the effect of emerging training algorithms from that of best deep learning practice.…”
[Figure caption: various learning rate schedules that were evaluated.]
Section: Periodic LR Schedules (mentioning)
confidence: 99%
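For context on what "periodic scheduling" means in the statement above, below is a minimal sketch of a cosine-with-restarts learning-rate schedule. The function name, period, and learning-rate bounds are illustrative assumptions, not the settings used in Refs. [45] or [46].

```python
import math

# Minimal periodic (cosine-with-restarts) learning-rate schedule:
# the LR decays smoothly within each cycle, then restarts at lr_max.
def periodic_lr(step, period=1000, lr_max=0.1, lr_min=1e-4):
    """Cosine decay from lr_max to lr_min, restarting every `period` steps."""
    phase = (step % period) / period          # position within current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * phase))

# The LR is lr_max at the start of each cycle and approaches lr_min at its end.
for s in (0, 500, 999, 1000):
    print(s, round(periodic_lr(s), 5))
```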
“…Unsupervised methods utilize the spike-timing-dependent plasticity (STDP) rule, but they are limited to shallow SNN structures with a few layers and yield much lower accuracy than ANNs on complex datasets (e.g., only 66.23% on CIFAR-10 (Srinivasan and Roy 2019)). On the other hand, supervised methods, represented by error backpropagation with surrogate functions, can achieve better performance than the unsupervised ones, but they still cannot achieve results comparable to ANNs on large-scale datasets (Liu et al. 2021a).…”
Section: Introduction (mentioning)
confidence: 99%
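The "surrogate functions" mentioned in this statement replace the non-differentiable spike step with a smooth stand-in during the backward pass. A minimal NumPy sketch follows; the rectangular pseudo-derivative is one common illustrative choice, not the specific function used in the cited works.

```python
import numpy as np

# Forward pass: the spike function is a hard threshold (non-differentiable).
def spike_forward(v_mem, v_th=1.0):
    """Emit a spike (1) when the membrane potential crosses the threshold."""
    return (v_mem >= v_th).astype(np.float32)

# Backward pass: substitute a pseudo-derivative that is nonzero only in a
# window around the threshold, so gradients can flow through spiking neurons.
def spike_backward(v_mem, v_th=1.0, width=0.5):
    """Rectangular surrogate gradient of the spike function."""
    return (np.abs(v_mem - v_th) < width).astype(np.float32) / (2.0 * width)

v = np.array([0.2, 0.9, 1.1, 2.0])
print(spike_forward(v))    # [0. 0. 1. 1.]
print(spike_backward(v))   # nonzero only near the threshold
```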
“…Nonetheless, handcrafted features do not guarantee accurate preservation of the semantic similarities of raw image pairs, resulting in degraded performance in the subsequent hash-function learning process. Deep learning, which learns fine-grained features in an end-to-end fashion with complex deep neural networks, has become the dominant approach in the computer vision community [35][36][37][38][39][40]. Deep learning-based methods [15,55] generally achieve significant performance improvements over their shallow counterparts.…”
Section: Introduction (mentioning)
confidence: 99%
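As a rough illustration of the end-to-end hash-function learning this statement describes, the sketch below maps an input to a binary code through a relaxed (tanh) binarization. The toy two-layer network, random weights, and code length are assumptions for illustration only, not the method of any cited work.

```python
import numpy as np

# Random weights stand in for a trained feature-extraction network.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(784, 128)), rng.normal(size=(128, 48))

def hash_code(x):
    """Map a raw input vector to a 48-bit binary code in {-1, +1}."""
    h = np.maximum(x @ W1, 0.0)      # feature extractor (ReLU layer)
    z = np.tanh(h @ W2)              # relaxed, differentiable "soft bits"
    return np.sign(z)                # discrete code used at retrieval time

# Semantically similar inputs should map to codes with small Hamming distance.
x = rng.normal(size=784)
print(hash_code(x)[:8])
```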