2021
DOI: 10.1038/s42256-021-00388-x

Fast and energy-efficient neuromorphic deep learning with first-spike times

Abstract: For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Similarly, engineered systems are optimized for short time-to-solution and low energy-to-solution characteristics. At the level of neuronal implementation, this implies achieving the desired results with as few and as early spikes as possible. With time-to-first-spike coding, both of these goals are inherently emerging features of learning. Here, we describe a rigorous derivation of a …


Cited by 78 publications (57 citation statements)
References 70 publications
“…In recent years, deep learning algorithms have been a breakthrough in the field of machine learning research [9, 10]. At the beginning of the 21st century, Canadian scholars put forward the concept of deep learning and set off a wave of deep learning research [11, 12]. After years of development, deep learning technology has achieved outstanding performance in the fields of computer vision, natural language processing, and speech recognition [13].…”
Section: Introduction
confidence: 99%
“…
• Time-to-first-spike (Göltz et al., 2021)
• Surrogate-gradient-based learning (Cramer et al., 2022)
• Analog ANN training (Weis et al., 2020)
These approaches differ in which measurements are necessary and what model of the physical system is used. In the time-to-first-spike gradient-based training scheme, which we will not discuss in detail here, the essential idea is that the derivative of the spike time with respect to the input weights can be computed from an analytical expression for the spike time.…”
Section: Gradient-based Learning Approaches
confidence: 99%
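The core of the time-to-first-spike scheme is that the first-spike time has a closed form whose derivative with respect to the weights can be written down directly. As a minimal sketch (not the actual neuron model of Göltz et al., which uses LIF dynamics with exponential synaptic kernels; the constant-current toy model, time constant, and threshold here are illustrative assumptions), consider a membrane potential V(t) = w·(1 − exp(−t/τ)) that must reach a threshold θ:

```python
import math

TAU = 10.0    # membrane time constant (ms); illustrative value
THETA = 1.0   # firing threshold; illustrative value

def spike_time(w, tau=TAU, theta=THETA):
    """Closed-form first-spike time for a toy neuron whose membrane
    potential rises as V(t) = w * (1 - exp(-t / tau)) under a constant
    weighted input. A spike requires w > theta; otherwise the potential
    never reaches threshold and we return None."""
    if w <= theta:
        return None
    return -tau * math.log(1.0 - theta / w)

def spike_time_grad(w, tau=TAU, theta=THETA):
    """Analytical derivative d t*/d w, obtained by differentiating the
    closed form above. It is negative: a larger weight fires earlier."""
    return -tau * theta / (w * w - w * theta)

w = 2.0
t_star = spike_time(w)          # 10 * ln(2) ~ 6.93 ms
analytic = spike_time_grad(w)   # -5.0 ms per unit weight

# Sanity check against a central finite difference.
eps = 1e-6
numeric = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
print(t_star, analytic, numeric)
```

Backpropagating such analytical spike-time derivatives through the network is what makes the scheme compatible with standard gradient descent; the price, as the statement notes, is that an explicit expression for the spike time must exist for the chosen neuron model.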
“…In contrast to the time-to-first-spike approach (Göltz et al., 2021), this does not require explicit or analytical knowledge of the function t⋆(x, p) and is also applicable to more complex neuron models. In the context of spiking neural networks, this was recognized by Wunderlich and Pehle (2020) and elaborated in full generality by Pehle (2021).…”
Section: A Principled Approach To Gradient-based Parameter Optimizati…
confidence: 99%
“…Larger system sizes will skew this comparison further in favor of BrainScaleS-2, which can even implement more densely connected network topologies without incurring a performance penalty. We also note that the BrainScaleS-2 chip requires less than 500 mW [37,38], while the Intel Xeon E5-2630v4 has a thermal design power (TDP) of 85 W for 10 cores. As such, BrainScaleS-2 uses comparable energy even for the smallest systems we implemented in the prototype system used in the main manuscript.…”
Section: B Computation Time Benchmark For Sampling From Neural Network
confidence: 99%
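The energy claim follows from energy = power × time. A back-of-envelope sketch using only the two power figures quoted in the statement (actual energy-to-solution depends on measured runtimes, which are not reproduced here):

```python
# Power figures quoted in the citation statement above.
cpu_tdp_w = 85.0     # Intel Xeon E5-2630v4 TDP (10 cores)
bss2_power_w = 0.5   # BrainScaleS-2 chip: < 500 mW

# Energy E = P * t, so at equal runtime the chip uses this many times
# less energy; equivalently, it may run this many times longer than
# the CPU and still break even on energy-to-solution.
break_even_slowdown = cpu_tdp_w / bss2_power_w
print(break_even_slowdown)  # 170.0
```

This is why "comparable energy even for the smallest systems" is plausible: the chip's power envelope buys it a wide runtime margin before the CPU wins on energy.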