2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD)
DOI: 10.1109/iccad51958.2021.9643500
A Design Flow for Mapping Spiking Neural Networks to Many-Core Neuromorphic Hardware

Cited by 25 publications (9 citation statements)
References 50 publications
“…While traditional von Neumann architectures have one or more central processing units physically separated from the main memory, neuromorphic architectures exploit the co-localization of memory and compute through near- and in-memory computation [18]. Alongside the tremendous progress in devising novel neuromorphic computing architectures, there have been many recent works that address how to map and compile (trained) SNN models for efficient execution on neuromorphic hardware [19][20][21][22][23][24][25][26][27][28][29][30][31].…”
Section: Introduction (mentioning)
confidence: 99%
“…In [192], Song et al. propose a complete design flow (roughly based on the design flows proposed for embedded multiprocessor systems [193, 182, 194]) for mapping throughput-constrained SNN applications to neuromorphic hardware.…”
Section: Application and Hardware Modeling for Predictable Performance… (mentioning)
confidence: 99%
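
The excerpt above only names the design flow. As a rough, hypothetical illustration of what one step of such a flow can look like, the Python sketch below greedily packs neurons onto fixed-capacity cores and then uses the resulting inter-core spike traffic as a crude throughput proxy. This is not the flow of [192]; the graph, capacities, link rate, and function names are assumptions made up for this example.

# Illustrative sketch only: a toy greedy partitioner that places SNN neurons
# onto fixed-capacity neuromorphic cores. All data and constants are invented.
from collections import defaultdict

# Hypothetical SNN as a synapse list: (pre-neuron, post-neuron, spikes per inference).
synapses = [
    ("n0", "n1", 120), ("n0", "n2", 80), ("n1", "n3", 60),
    ("n2", "n3", 90), ("n3", "n4", 150), ("n2", "n4", 40),
]
NEURONS_PER_CORE = 2            # assumed per-core neuron capacity
SPIKES_PER_SEC_PER_LINK = 1e6   # assumed interconnect spike rate (made up)

def greedy_map(synapses, capacity):
    """Assign neurons to cores first-fit, preferring the core that already
    holds the neuron's heaviest-traffic neighbours."""
    neurons = sorted({n for pre, post, _ in synapses for n in (pre, post)})
    cores = defaultdict(list)   # core id -> neurons placed on it
    placement = {}              # neuron -> core id
    for n in neurons:
        best, best_score = None, -1
        for c, members in cores.items():
            if len(members) >= capacity:
                continue
            # traffic between this neuron and neurons already on core c
            score = sum(w for pre, post, w in synapses
                        if (pre == n and post in members)
                        or (post == n and pre in members))
            if score > best_score:
                best, best_score = c, score
        if best is None:
            best = len(cores)   # no usable core yet: open a new one
        cores[best].append(n)
        placement[n] = best
    return placement

placement = greedy_map(synapses, NEURONS_PER_CORE)

# Crude throughput proxy: spikes crossing core boundaries load the shared
# interconnect, so fewer crossings allow a higher sustainable inference rate.
cross_traffic = sum(w for pre, post, w in synapses
                    if placement[pre] != placement[post])
max_inferences_per_sec = SPIKES_PER_SEC_PER_LINK / max(cross_traffic, 1)
print(placement, cross_traffic, max_inferences_per_sec)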
“…PSOPART [27] minimizes spike latency on the shared interconnect, SpiNeMap [9] minimizes interconnect energy, DFSynthesizer [82] maximizes throughput, DecomposedSNN [11] maximizes crossbar utilization, EaNC [90] minimizes the overall energy of a machine learning task by targeting both computation and communication energy, TaNC [89] minimizes the average temperature of each crossbar, eSpine [91] maximizes NVM endurance in a crossbar, RENEU [80] minimizes circuit aging in a crossbar's peripheral circuits, and NCil [86] reduces read-disturb issues in a crossbar, improving the inference lifetime. Besides these techniques, there are also other software frameworks [1,3,4,6,12,23,25,38,47,50,54,60,71,75,76,78,85,88] and run-time approaches [10,84] addressing one or more of these optimization objectives.…”
Section: Hardware Implementation of Machine Learning Inference (mentioning)
confidence: 99%
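
To make the kinds of objectives listed in that excerpt concrete, the following sketch computes three toy metrics for a given neuron-to-core placement: inter-core spike traffic (a rough stand-in for interconnect energy), average crossbar utilization, and the maximum per-core load. The data, constants, and formulas are invented for illustration and do not reproduce SpiNeMap, DFSynthesizer, or any other cited framework.

# Illustrative sketch only: toy evaluation metrics for a candidate mapping.
def mapping_metrics(synapses, placement, crossbar_size):
    """Return (inter-core spikes, average crossbar utilization, max core load)
    for a neuron->core placement; all quantities are simplistic proxies."""
    # spikes that must travel over the interconnect (energy/latency proxy)
    inter_core_spikes = sum(
        w for pre, post, w in synapses if placement[pre] != placement[post])

    # neurons placed per core
    core_load = {}
    for n, c in placement.items():
        core_load[c] = core_load.get(c, 0) + 1

    # fraction of crossbar rows actually occupied, averaged over used cores
    utilization = sum(core_load.values()) / (len(core_load) * crossbar_size)
    return inter_core_spikes, utilization, max(core_load.values())

# Made-up example data: a small synapse list and a neuron-to-core placement.
synapses = [("n0", "n1", 120), ("n1", "n2", 60), ("n2", "n3", 90)]
placement = {"n0": 0, "n1": 0, "n2": 1, "n3": 1}
spikes, util, max_load = mapping_metrics(synapses, placement, crossbar_size=2)
print(spikes, round(util, 2), max_load)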