2022
DOI: 10.1073/pnas.2109194119

Surrogate gradients for analog neuromorphic computing

Abstract: To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a…
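To make the "integrate as an analog sum, communicate with spikes" principle from the abstract concrete, here is a minimal leaky integrate-and-fire (LIF) neuron simulation in Python. All constants (time step, membrane time constant, threshold) and the random input drive are illustrative placeholders, not parameters from the paper or the BrainScaleS hardware.

```python
import numpy as np

# Minimal LIF neuron: inputs are integrated into an analog membrane
# potential, but the output is a binary spike train (events in time).
# All constants below are illustrative, not taken from the paper.
dt, tau_mem, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 2.0, size=1000)  # arbitrary analog drive

v = 0.0
spikes = np.zeros_like(input_current)
for t, i_in in enumerate(input_current):
    # Leaky analog integration of the input.
    v += dt / tau_mem * (i_in - v)
    if v >= v_thresh:          # threshold crossing -> binary spike event
        spikes[t] = 1.0
        v = v_reset            # reset the membrane after the spike

print(f"{int(spikes.sum())} spikes in {len(spikes)} time steps")
```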

Cited by 70 publications (61 citation statements)
References 55 publications (62 reference statements)
“…Subsequent work applied this training procedure to other neuron models or used other pseudo-derivatives (Bellec et al., 2018; Neftci et al., 2019). The in-the-loop training approach was realized on the BrainScaleS platform by Schmitt et al. (2017) for rate-based models and by Cramer et al. (2022) for SNNs employing individual spikes for information transfer and processing, as summarized in Section 3.3.1. Petrovici et al. (2016) related stochastically stimulated and recurrently connected populations of LIF neurons in the high-conductance state to Restricted Boltzmann Machines.…”
Section: Discussion (mentioning; confidence: 99%)
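The pseudo-derivative (surrogate gradient) idea referenced in this snippet can be sketched as follows: the forward pass keeps the non-differentiable threshold nonlinearity, while the backward pass substitutes a smooth surrogate so that gradients can flow. Below is a minimal sketch using JAX's custom_vjp; the fast-sigmoid surrogate and the steepness parameter beta are one common (SuperSpike-style) choice, not necessarily the exact pseudo-derivative used in the cited works or on BrainScaleS.

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(v):
    # Forward pass: hard threshold (Heaviside) on the membrane potential v,
    # measured relative to the firing threshold.
    return (v > 0.0).astype(v.dtype)

def spike_fwd(v):
    return spike(v), v                      # save v for the backward pass

def spike_bwd(v, grad_out):
    # Backward pass: replace the ill-defined derivative of the step function
    # with a smooth surrogate (fast sigmoid); beta sets its steepness.
    beta = 10.0
    surrogate = 1.0 / (beta * jnp.abs(v) + 1.0) ** 2
    return (grad_out * surrogate,)

spike.defvjp(spike_fwd, spike_bwd)

# Gradients now flow through the spike nonlinearity:
loss = lambda v: jnp.sum(spike(v))
print(jax.grad(loss)(jnp.array([-0.5, 0.1, 0.8])))
```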
“…
• Time-to-first spike (Göltz et al., 2021)
• Surrogate-gradient-based learning (Cramer et al., 2022)
• Analog ANN training (Weis et al., 2020)
They differ in which measurements are necessary and in what model of the physical system is used. In the time-to-first-spike gradient-based training scheme, which we won't discuss in detail here, the essential idea is that the derivative of the spike time with respect to the input weights can be computed from an analytical expression of the spike time.…”
Section: Gradient-based Learning Approaches (mentioning; confidence: 99%)
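As a toy illustration of differentiating an analytical spike-time expression with respect to the input weights (this is not the neuron model of Göltz et al., 2021): for a non-leaky integrate-and-fire neuron driven by a constant current I = Σ_i w_i x_i with threshold θ, the first spike time is T = θ / I, so dT/dw_i = -θ x_i / I². The sketch below checks this closed form against JAX autodiff; all names and values are hypothetical.

```python
import jax
import jax.numpy as jnp

theta = 1.0                      # firing threshold (arbitrary units)
x = jnp.array([0.3, 0.5, 0.2])   # constant input currents (toy values)
w = jnp.array([0.8, 0.4, 1.1])   # input weights

def first_spike_time(w):
    # Non-leaky integrate-and-fire: V(t) = t * (w . x), so the threshold
    # theta is reached at T = theta / (w . x).
    return theta / jnp.dot(w, x)

# Derivative of the spike time w.r.t. the weights via autodiff ...
auto_grad = jax.grad(first_spike_time)(w)
# ... and via the closed form dT/dw_i = -theta * x_i / (w . x)^2.
closed_form = -theta * x / jnp.dot(w, x) ** 2

print(auto_grad, closed_form)    # the two agree
```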
“…1 for an illustration). This is made possible by combining standard backpropagation with recently developed surrogate gradient methods to train deep SNNs [19, 20, 21, 22, 23, 24]. The advantage of the hybrid approach is that early layers can operate in the extremely efficient event-based computing paradigm, which can run on special-purpose hardware implementations [25, 26, 27, 28, 29, 30].…”
Section: Introduction (mentioning; confidence: 99%)