Going Deeper in Spiking Neural Networks: VGG and Residual Architectures
Preprint, 2018. DOI: 10.48550/arxiv.1802.02627
Cited by 6 publications (19 citation statements: 0 supporting, 19 mentioning, 0 contrasting). References 0 publications.
“…It is important to set the neuronal firing thresholds sufficiently high so that each spiking neuron can closely resemble the ANN activation without loss of information. In the literature, several methods have been proposed [4,8,14,38,35] for balancing the ratio between the neuronal thresholds and synaptic weights of spiking neurons in ANN-SNN conversion. In this paper, we compare various aspects of our direct-spike-trained models with one recent work [38], which proposed a near-lossless ANN-SNN conversion scheme for deep network architectures.…”
Section: ANN-SNN Conversion Scheme
Citation type: mentioning (confidence: 99%)
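As a concrete illustration of the threshold-balancing idea in the excerpt above, here is a minimal Python sketch of data-based, layer-wise balancing, where each layer's firing threshold is set to the peak pre-activation observed on calibration inputs. The balance_thresholds helper and the toy layer shapes are illustrative assumptions, not the exact procedures of [4,8,14,38,35].

    import numpy as np

    def balance_thresholds(weights, calib_inputs):
        # Layer-wise threshold balancing (sketch): the firing threshold
        # of each spiking layer is set to the maximum pre-activation seen
        # on calibration data, so no neuron is driven far above threshold.
        thresholds = []
        x = calib_inputs                          # shape: (batch, features)
        for W in weights:                         # one weight matrix per layer
            pre_act = x @ W                       # pre-activations of this layer
            thresholds.append(float(pre_act.max()))  # threshold = peak input current
            x = np.maximum(pre_act, 0.0)          # ReLU output feeds the next layer
        return thresholds

    # Hypothetical usage: two random layers and random calibration data.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((784, 256)), rng.standard_normal((256, 10))]
    print(balance_thresholds(weights, rng.random((64, 784))))

Setting the threshold to the observed maximum keeps each integrate-and-fire neuron's firing rate roughly proportional to the corresponding ReLU activation, which is the near-lossless-conversion condition the excerpt alludes to.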
“…This mechanism enables event-based and asynchronous computation across the layers of spiking systems, which makes them naturally suitable for ultra-low-power computing. Furthermore, recent works [38,35] have shown that these properties make SNNs significantly more attractive for deeper networks in hardware implementations: the spike signals become significantly sparser as the layers go deeper, so the number of required computations drops accordingly.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
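The sparsity argument above can be made concrete with a toy integrate-and-fire simulation: each layer only integrates current for incoming spike events, so the work per timestep scales with the spike count rather than the layer width, and with zero-mean weights the spike counts typically shrink with depth. This is an illustrative sketch, not the simulator or networks used in [38,35].

    import numpy as np

    def if_layer(spikes_in, W, v_mem, v_th=1.0):
        # One timestep of an integrate-and-fire layer: integrate current
        # only from incoming spike events, fire where the membrane
        # potential crosses threshold, then reset the neurons that fired.
        v_mem = v_mem + spikes_in @ W
        spikes_out = v_mem >= v_th
        v_mem[spikes_out] = 0.0
        return spikes_out.astype(float), v_mem

    rng = np.random.default_rng(1)
    W1 = rng.standard_normal((100, 100)) * 0.05
    W2 = rng.standard_normal((100, 100)) * 0.05
    v1, v2 = np.zeros(100), np.zeros(100)
    counts = [0.0, 0.0]
    for t in range(50):
        s0 = (rng.random(100) < 0.2).astype(float)  # Poisson-like input spikes
        s1, v1 = if_layer(s0, W1, v1)
        s2, v2 = if_layer(s1, W2, v2)
        counts[0] += s1.sum()
        counts[1] += s2.sum()
    # In this toy setup the deeper layer usually fires far less often.
    print("spike counts per layer over 50 steps:", counts)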
“…SNNs have proven effective on several problems but have remained less competitive than CNNs. Recently, Sengupta et al. (2018) demonstrated that a deep SNN can achieve better accuracy than a CNN on the challenging ImageNet dataset. A detailed overview of deep learning in SNNs is given in Tavanaei et al. (2018).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Unlike ANNs, SNNs are hybrid digital-analog machines that use the temporal dimension not merely as a neutral substrate for computation, but as a means to encode and process information [7]. Training methods for SNNs typically assume deterministic non-linear dynamical models for the spiking neurons and are motivated either by biological plausibility, such as the spike-timing-dependent plasticity (STDP) rule [5], [8], or by an attempt to mimic the operation of ANNs and their associated learning rules (see, e.g., [9] and references therein). Deterministic models are known to be limited in their expressive power, especially with regard to prior domain knowledge, uncertainty, and the definition of generic queries and tasks.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
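For reference, the STDP rule mentioned in the excerpt above can be written as a pair-based trace update: each neuron keeps an exponentially decaying trace of its recent spikes, a post-synaptic spike potentiates the weight in proportion to the pre-synaptic trace, and a pre-synaptic spike depresses it in proportion to the post-synaptic trace. The time constant and learning rates below are illustrative assumptions, not values from [5] or [8].

    import numpy as np

    def stdp_step(w, pre_spike, post_spike, pre_trace, post_trace,
                  a_plus=0.01, a_minus=0.012, tau=20.0):
        # Pair-based STDP, one timestep: traces decay with constant tau,
        # LTP fires on a post-spike, LTD fires on a pre-spike.
        pre_trace = pre_trace * (1.0 - 1.0 / tau) + pre_spike
        post_trace = post_trace * (1.0 - 1.0 / tau) + post_spike
        w = w + a_plus * pre_trace * post_spike    # potentiation (pre before post)
        w = w - a_minus * post_trace * pre_spike   # depression (post before pre)
        return float(np.clip(w, 0.0, 1.0)), pre_trace, post_trace

    # A causal pre -> post pairing (pre fires at t=0, post at t=3)
    # should slightly potentiate the weight.
    w, pre_tr, post_tr = 0.5, 0.0, 0.0
    for t in range(10):
        pre, post = float(t == 0), float(t == 3)
        w, pre_tr, post_tr = stdp_step(w, pre, post, pre_tr, post_tr)
    print("weight after causal pairing:", w)  # slightly above 0.5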