2019
DOI: 10.3389/fnins.2019.00095

Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

Abstract: Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique …

Cited by 739 publications (683 citation statements)
References 23 publications
“…From the algorithmic perspective, SNN computing models represent a significant shift from traditional deep non-spiking networks (the current de facto standard) because of the additional time-domain encoding of information. Hence, the classification accuracies such networks provide still lag behind those of their non-spiking counterparts [7]. Further, it is unclear whether emerging neuromorphic devices based on spintronics, resistive memories, and phase-change memories would still exhibit multi-level characteristics at aggressively scaled device dimensions (the key characteristic being leveraged in these non-volatile devices).…”
mentioning
confidence: 99%
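The "time-domain encoding" referenced above is the core operational difference between SNNs and conventional ANNs: information is carried by the timing and count of discrete spikes rather than by continuous-valued activations. As a minimal illustration (not code from the paper; all constants are arbitrary), a leaky integrate-and-fire neuron turns a continuous input current into a spike train:

import numpy as np

def lif_neuron(input_current, v_thresh=1.0, leak=0.95, v_reset=0.0):
    # Leaky integration: the membrane potential accumulates input over
    # time and emits a discrete spike whenever it crosses threshold.
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t
        if v >= v_thresh:
            spikes[t] = 1.0
            v = v_reset  # hard reset after firing
    return spikes

rng = np.random.default_rng(0)
current = 0.3 * rng.random(100)  # noisy input over 100 timesteps
print(int(lif_neuron(current).sum()), "spikes in 100 timesteps")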
“…To compare the energy efficiency of an ANN and its equivalent SNN implementation, we follow the convention from the neuromorphic computing (NC) community and compute the total synaptic operations (SynOps) required to perform a given task (Merolla et al., 2014; Rueckauer et al., 2017; Sengupta et al., 2019). For an ANN, the total synaptic operations (multiply-and-accumulate, MAC) per classification is defined as follows…”
Section: Energy Efficiency: Counting Synaptic Operations
mentioning
confidence: 99%
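The quoted equation itself is elided above, but the counting convention it refers to is standard in the neuromorphic computing literature: an ANN inference costs one multiply-and-accumulate (MAC) per weight per input presentation, while an SNN inference costs one accumulate per presynaptic spike per outgoing synapse. The sketch below is a hedged illustration of that bookkeeping; the layer sizes and random spike activity are illustrative, not from the cited work.

import numpy as np

def ann_macs(layer_inputs, layer_fanouts):
    # ANN cost: one MAC per weight, i.e. inputs * outputs per dense layer.
    return sum(n_in * n_out for n_in, n_out in zip(layer_inputs, layer_fanouts))

def snn_synops(spike_trains, layer_fanouts):
    # SNN cost: every presynaptic spike triggers one accumulate on each
    # of its fan-out synapses, summed over all inference timesteps.
    return sum(int(s.sum()) * f for s, f in zip(spike_trains, layer_fanouts))

# Two hypothetical dense layers, 784 -> 128 -> 10.
inputs, fanouts = [784, 128], [128, 10]
print("ANN MACs per classification:", ann_macs(inputs, fanouts))

rng = np.random.default_rng(0)
# Binary spike trains over 100 timesteps for each presynaptic population
# (random activity here, purely for illustration).
trains = [rng.integers(0, 2, size=(100, n)) for n in inputs]
print("SNN SynOps per classification:", snn_synops(trains, fanouts))

For typical spike rates, the SynOps count grows with the number of inference timesteps, which is why conversion methods that shorten inference time matter for the energy-efficiency argument.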
“…Recently, considerable research effort has been devoted to addressing this problem, and the resulting learning rules can be broadly categorized into ANN-to-SNN conversion (Cao et al., 2015; Diehl et al., 2015), back-propagation through time with surrogate gradients (Neftci et al., 2019), and tandem learning. Despite several successful attempts at large-scale image classification with deep SNNs (Rueckauer et al., 2017; Hu et al., 2018; Sengupta et al., 2019; Wu et al., 2019), their application to large-vocabulary continuous speech recognition (LVCSR) tasks remains unexplored. In this work, we explore an SNN-based acoustic model for LVCSR using a recently proposed tandem learning rule that supports efficient and rapid inference.…”
Section: Introduction
mentioning
confidence: 99%
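Of the three families of learning rules named in this excerpt, the surrogate-gradient approach is the most compact to illustrate. The sketch below uses one common formulation, a fast-sigmoid surrogate derivative in the style of Neftci et al. (2019); the class name and slope constant are illustrative choices, not code from any of the cited papers.

import torch

class SpikeFn(torch.autograd.Function):
    # Forward: a hard threshold (Heaviside step) produces the binary spike.
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    # Backward: replace the step's zero/undefined derivative with a smooth
    # fast-sigmoid surrogate so errors can propagate through time.
    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        slope = 10.0  # illustrative steepness; a tunable hyperparameter
        surrogate = 1.0 / (slope * v_minus_thresh.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage: membrane potential minus threshold in, differentiable spikes out.
v = torch.randn(5, requires_grad=True)
spikes = SpikeFn.apply(v)
spikes.sum().backward()
print(spikes, v.grad)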
“…Some recent work on SNNs has focused on converting images from ANN benchmarks to spike trains and then classifying them [16], [17]. While this is excellent research, we feel it is fundamentally not a good application for SNNs, since the original input signal is static and does not change with time.…”
Section: Neuromorphic Benchmarks
mentioning
confidence: 99%
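The image-to-spike-train conversion this excerpt criticizes is typically Poisson rate coding: each pixel intensity sets a per-timestep firing probability, so a static image is simply replayed as stochastic spikes with no genuinely temporal structure. A minimal sketch (the function name and parameters are illustrative):

import numpy as np

def poisson_encode(image, timesteps=100, rng=None):
    # Pixel intensity in [0, 1] becomes the firing probability at each
    # timestep: a static image replayed as a stochastic spike train.
    rng = rng or np.random.default_rng()
    probs = np.clip(image, 0.0, 1.0)
    return (rng.random((timesteps, *image.shape)) < probs).astype(np.uint8)

img = np.random.default_rng(1).random((28, 28))  # stand-in for a static image
spikes = poisson_encode(img, timesteps=50)
print(spikes.shape)   # (50, 28, 28)
print(spikes.mean())  # approx. the mean pixel intensity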