2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)
DOI: 10.1109/aicas48895.2020.9073998

Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays

Abstract: Recent breakthroughs suggest that local, approximate gradient descent learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can be scalably implemented using neuromorphic VLSI, an architecture that can learn in situ as accurately as conventional processors is still missing. Here, we propose a subthreshold circuit architecture designed through insights obtained from machine learning and computational neuroscience that could achieve such accuracy. Using a surrogate gradient learning framework…

Cited by 27 publications (36 citation statements) · References 25 publications (24 reference statements)

Citation statements, ordered by relevance:
“…This can lead to a large number of updates and inefficient implementations in hardware. To tackle this problem, updates can be made in an error-triggered fashion, as discussed in Payvand et al. (2020). A direct consequence of the local classifiers is the lack of cross-layer adaptation of the layers.…”
Section: Results
confidence: 99%
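To make the error-triggered scheme concrete, here is a minimal sketch (illustrative only: the array sizes, threshold, and ternary sign-of-error update are our assumptions, not the circuit described in the paper). Weights change only when a neuron's error crosses a threshold, so quiet time steps trigger no crossbar writes.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 5))   # weights: 10 outputs x 5 inputs (hypothetical sizes)
lr, err_threshold = 1e-2, 0.25            # illustrative hyperparameters

def error_triggered_update(W, pre_trace, error):
    """Update only the rows whose error magnitude crosses the threshold."""
    triggered = np.abs(error) > err_threshold   # per-neuron "error events"
    # sign(error) keeps the applied update ternary, which maps naturally
    # onto coarse conductance changes in a crossbar.
    W[triggered] -= lr * np.outer(np.sign(error[triggered]), pre_trace)
    return W, int(triggered.sum())

pre_trace = rng.random(5)                  # presynaptic activity trace
error = rng.normal(scale=0.3, size=10)     # instantaneous per-neuron error
W, n_updates = error_triggered_update(W, pre_trace, error)
print(f"rows updated this step: {n_updates} / {W.shape[0]}")
```

Raising err_threshold trades a sparser, cheaper update stream against a noisier gradient estimate, which is the tension the quoted passage points at.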
“…While they show that the rate-based counterpart of their algorithm works on CIFAR-10 and ImageNet, it comes at the cost of employing dendritic network topologies and specialized synapses. Indeed, as noted in the introduction, most backpropagation-derived local learning rules involve a third factor for supervision (Payvand et al., 2020). In this regard, we believe that our EqSpike implementation of equilibrium propagation, with only two factors, achieves an optimal trade-off between circuitry complexity and performance.…”
Section: Discussion
confidence: 99%
“…The first two take into account, as usual, the behavior of pre- and post-neurons, and the third allows for the introduction of an additional error factor. This third factor leads to implementations on neuromorphic chips that are less compact, and possibly less energy efficient, than two-factor learning rules such as STDP (Payvand et al., 2020).…”
Section: Introduction
confidence: 99%
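The contrast between the two rule families can be written down directly; the sketch below is schematic (the choice of a spike trace, a surrogate derivative, and a per-neuron error as the three factors is our illustrative reading, not a specific published rule).

```python
import numpy as np

rng = np.random.default_rng(1)
n_post, n_pre = 4, 6
W = rng.normal(scale=0.1, size=(n_post, n_pre))
eta = 1e-2

pre_factor   = rng.random(n_pre)         # 1st factor: presynaptic spike trace
post_factor  = rng.random(n_post)        # 2nd factor: postsynaptic term (e.g. surrogate derivative)
third_factor = rng.normal(size=n_post)   # 3rd factor: per-neuron error / neuromodulator

# Two-factor (STDP-like) update: pre x post only.
dW_two_factor = eta * np.outer(post_factor, pre_factor)
# Three-factor update: the same outer product, gated per neuron by the error.
dW_three_factor = eta * np.outer(third_factor * post_factor, pre_factor)
W -= dW_three_factor
```

The hardware cost discussed above comes from the third factor: it must be routed to every synapse as an extra signal, on top of the purely local pre/post terms.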
“…This implies only constant [O(1)] memory overhead for learning, which significantly simplifies the neuromorphic hardware implementation. Payvand et al. [109] described a DECOLLE crossbar implementation that leverages the sharing of the learning and inference signals while eliciting updates in a temporally sparse, error-driven fashion. Furthermore, the learning dynamics are potentially immune to mismatch in the synaptic dynamics, since the same signal is used for computing the forward pass and the gradient dynamics.…”
Section: Implementation Strategies in Neu…
confidence: 99%
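A rough sketch of the signal-sharing idea follows (our reading of the passage above, not the authors' implementation; the LIF constants, surrogate derivative, and loss are placeholders). The same presynaptic trace P drives the forward membrane update and reappears as the eligibility term in the weight update, so mismatch in the trace dynamics distorts inference and learning identically, and only O(1) state per input is kept.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_pre, n_post = 50, 8, 3
spikes_in = (rng.random((T, n_pre)) < 0.2).astype(float)   # random input spike trains
W = rng.normal(scale=0.3, size=(n_post, n_pre))
alpha, beta, lr, v_th = 0.9, 0.85, 1e-2, 1.0               # illustrative constants
target = (rng.random(n_post) < 0.5).astype(float)          # dummy per-neuron target

P = np.zeros(n_pre)    # presynaptic trace: the shared signal
U = np.zeros(n_post)   # membrane potentials

for t in range(T):
    P = alpha * P + spikes_in[t]                   # trace update, O(1) memory per input
    U = beta * U + W @ P                           # forward pass consumes P ...
    S = (U > v_th).astype(float)                   # spike on threshold crossing
    sg = np.maximum(0.0, 1.0 - np.abs(U - v_th))   # surrogate derivative around v_th
    U *= 1.0 - S                                   # reset spiking neurons
    err = S - target                               # placeholder local error
    W -= lr * np.outer(err * sg, P)                # ... and the update reuses the same P
```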
“…The gradient-based learning of SNNs induces a three-factor rule [see (12)], comprising one term to compute the loss gradient (∂L/∂S_t) and two terms for the network states (∂S_t/∂θ). Payvand et al. [109] exploited this factorization in a neuromorphic design comprising two types of cores: processing cores and neuromorphic cores. Processing cores are general-purpose processors that compute the loss gradients, and neuromorphic cores compute the network states and their gradients.…”
Section: Implementation Strategies in Neu…
confidence: 99%
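The factorization can be mimicked in a toy split across the two core types (the function boundaries and the tanh stand-in for a smoothed spike function are our illustration, not the authors' design): the processing core evaluates ∂L/∂S_t, the neuromorphic core produces S_t together with its local gradient, and their product yields the parameter update.

```python
import numpy as np

rng = np.random.default_rng(3)
n_out, n_in = 4, 6
theta = rng.normal(scale=0.5, size=(n_out, n_in))   # parameters held in the crossbar

def neuromorphic_core(x, theta):
    """Compute the network state S_t and its local gradient dS_t/d(pre-activation)."""
    pre = theta @ x
    S = np.tanh(pre)            # tanh stands in for a smoothed spike function
    dS_dpre = 1.0 - S**2        # state gradient, computed where the state lives
    return S, dS_dpre

def processing_core(S, target):
    """General-purpose core: evaluate the loss gradient dL/dS_t."""
    return S - target           # gradient of 0.5 * ||S - target||^2

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)
S, dS_dpre = neuromorphic_core(x, theta)
dL_dS = processing_core(S, target)
# dL/dtheta = (dL/dS_t) x (dS_t/dpre) x input, i.e. the three factors combined.
theta -= 1e-2 * np.outer(dL_dS * dS_dpre, x)
```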