2019
DOI: 10.48550/arxiv.1912.11443
Preprint

Fast and energy-efficient neuromorphic deep learning with first-spike times

Abstract: For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Similarly, engineered systems also strive for short time-to-solution and low energy-to-solution characteristics. At the level of neuronal implementation, this implies achieving the desired results with as few and as early spikes as possible. In the time-to-first-spike coding framework, both of these goals are inherently emerging features of learning. Here, we describe a rigorous deri…

Cited by 13 publications (16 citation statements)
References 29 publications
“…Despite the efforts to improve the efficiency of TTFS coding in deep SNNs [17,36], their improvements have been restricted by the conversion-based training algorithms. In several studies, SNNs were directly trained, but their methods were not validated as applicable to deep SNNs [12,37,38]. A recent study suggested direct training methods…”
Section: Training Methods of Deep SNNs
confidence: 99%
“…We have generalized this method to include an exact, closed-form expression for finite membrane time constants [10], [11] and applied it to a 3-layer network emulated on BrainScaleS-2 (Fig. 2).…”
Section: II. Experiments
confidence: 99%
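
As context for the excerpt above: for a current-based LIF neuron with exponential synaptic kernels, the first-spike time does admit a closed-form solution in special parameter regimes. The Python sketch below illustrates one such regime, tau_mem = 2 * tau_syn, in normalized units. It is an illustration of the general idea only, not the cited works' exact expression, and all names and parameters are illustrative.

import numpy as np

def first_spike_time(weights, input_times, tau_s, theta):
    # Time-to-first-spike of a LIF neuron with current-based exponential
    # synapses, in normalized units, for the special case tau_mem = 2*tau_s.
    # Membrane potential (resting potential 0, all inputs assumed causal):
    #   u(t) = sum_i w_i * (exp(-(t - t_i)/(2*tau_s)) - exp(-(t - t_i)/tau_s))
    # Substituting x = exp(-t/(2*tau_s)) turns u(t) = theta into a quadratic
    #   B*x**2 - A*x + theta = 0,
    # with A = sum_i w_i*exp(t_i/(2*tau_s)) and B = sum_i w_i*exp(t_i/tau_s).
    w = np.asarray(weights, dtype=float)
    a = np.exp(np.asarray(input_times, dtype=float) / (2.0 * tau_s))
    A = np.sum(w * a)
    B = np.sum(w * a**2)
    disc = A**2 - 4.0 * theta * B
    if B <= 0.0 or disc < 0.0:
        return np.inf  # membrane never reaches threshold for this input set
    # The larger root in x is the earlier crossing time, since x decays with t.
    x = (A + np.sqrt(disc)) / (2.0 * B)
    t = -2.0 * tau_s * np.log(x)
    # If t precedes the last input, the assumed causal set was wrong; a full
    # solver would iterate over causal subsets ordered by input time.
    return t if t >= np.max(input_times) else np.inf

For example, a single input of weight 1.0 at t = 0 with tau_s = 1.0 and threshold theta = 0.2 yields a first-spike time of roughly 0.65 * tau_s; inputs that never drive the membrane above threshold return infinity.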
“…This duration scales proportionally to the chosen synaptic and membrane time constants, which in our case were set to 5 µs. Taking into consideration relaxation times between patterns, our setup is able to handle a pattern throughput of at least 20 kHz, independently of emulated network size [11].…”
Section: II. Experiments
confidence: 99%
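
As a quick plausibility check on the quoted figures (an illustration, not from the excerpt itself): a 20 kHz pattern throughput corresponds to a 50 µs budget per pattern, i.e. ten of the 5 µs time constants, leaving room for both the emulation itself and the relaxation time between patterns.

tau = 5e-6           # synaptic/membrane time constant from the excerpt (5 us)
throughput = 20e3    # quoted minimum pattern throughput (20 kHz)
budget = 1.0 / throughput   # time available per pattern: 5e-05 s = 50 us
print(budget / tau)         # -> 10.0 time constants per pattern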
“…Although the HICANN architecture has been successfully used to implement deep multi-layer networks using rate-based spiking models [34] and backpropagation-based training, it loses some of its power efficiency by emulating a Perceptron model. Encoding the activation in the time between spikes can enhance the efficiency significantly [35]. In all spiking solutions, the network operates in continuous time, and therefore the size of the network is limited by the number of neurons and synapses available on the chip.…”
Section: Analog Inference: Rate-based Extension of HICANN-X
confidence: 99%