Proceedings of the 2019 Great Lakes Symposium on VLSI (GLSVLSI '19)
DOI: 10.1145/3299874.3317966

A Systolic SNN Inference Accelerator and its Co-optimized Software Framework

Cited by 20 publications (3 citation statements). References 16 publications.
“…While SNNs have gathered significant research interest as promising bio-inspired models of computation, only very few works have been done on SNN hardware accelerators [15,24,38,39,45], particularly array-based accelerators [1,6,23,49,52], because the spatiotemporal nature of the spikes makes it difficult to design an efficient architecture. Importantly, these limited existing works have primarily focused on feedforward SNNs, and very few are capable of executing R-SNNs [2,17,37].…”
Section: R-SNN Accelerators (mentioning)
confidence: 99%
“…Importantly, these limited existing works have primarily focused on feedforward SNNs, and very few are capable of executing R-SNNs [2,17,37]. For example, [1,23,49,52] introduced systolic-array-based accelerators for spiking CNNs. However, these works target only feedforward networks, and no efficient method has been proposed for handling recurrence, which produces tightly coupled data dependencies in both time and space.…”
Section: R-SNN Accelerators (mentioning)
confidence: 99%
“…Spatial architectures couple PEs in such a way that they can exchange intermediate results without having to access a central memory [5]. Typical implementations use either fixed data-path connections between the individual PEs (systolic arrays) [10] or Network-on-Chips (NoCs) that feature a highly flexible packet-based interconnect [9], [11]. Systolic arrays are excellent for performing convolutions in cases where the dataflow is easily predictable, i.e., in low-sparsity situations [20].…”
Section: Introduction (mentioning)
confidence: 99%
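As a rough illustration of the "fixed data path" idea, the following toy model (an assumption for exposition, not the architecture of the cited paper) simulates a weight-stationary systolic array computing a spike-vector-by-weight-matrix product: PE (i, j) holds W[i, j], spikes shift one PE to the right per cycle, partial sums shift one PE down, and no PE touches a central memory while the wavefront drains.

import numpy as np

# Illustrative weight-stationary systolic array, simulated cycle by cycle.
def systolic_spike_matvec(spikes, W):
    """spikes: (K,) 0/1 vector; W: (K, N) weights; returns W.T @ spikes."""
    K, N = W.shape
    x_reg = np.zeros((K, N))   # spike latched in each PE
    p_reg = np.zeros((K, N))   # partial sum latched in each PE
    out = np.zeros(N)
    for t in range(K + N - 1):
        x_in = np.zeros((K, N))
        p_in = np.zeros((K, N))
        x_in[:, 1:] = x_reg[:, :-1]   # spikes move one PE to the right
        p_in[1:, :] = p_reg[:-1, :]   # partial sums move one PE down
        if t < K:
            x_in[t, 0] = spikes[t]    # skewed injection: row i is fed at cycle i
        x_reg = x_in
        p_reg = p_in + x_in * W       # each PE MACs its stationary weight
        j = t - (K - 1)               # column j's sum reaches the bottom at cycle K-1+j
        if 0 <= j < N:
            out[j] = p_reg[K - 1, j]
    return out

For example, systolic_spike_matvec(np.array([1.0, 1.0]), np.array([[1.0, 2.0], [3.0, 4.0]])) drains [4.0, 6.0] after K + N - 1 = 3 cycles, matching W.T @ spikes. With binary spikes each MAC degenerates to a conditional add, which is one reason systolic designs map naturally onto feedforward SNN layers but not onto the recurrent dependency sketched above.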