2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad45719.2019.8942127
MAGNet: A Modular Accelerator Generator for Neural Networks

Cited by 94 publications
(61 citation statements)
References 20 publications
“…MAGNet [148] is a Modular Accelerator Generator for Neural Networks that consists of three modules: (1) a MAGNet Designer that, given a neural network, serves as a design space exploration tool for both training and inference accelerators.…”
Section: Tools For Design Space Exploration (Dse)mentioning
confidence: 99%
“…There are a number of hardware architectures in the literature that aim to accelerate CNN applications while reducing computational redundancy [14,15,16,17]. There are also approaches that exploit the high bandwidth available near the sensor interface by bringing computation closer to the image sensor [7].…”
Section: Related Workmentioning
confidence: 99%
“…Recent work reduces the number of multiplications in the compute array with the Winograd or FFT algorithm [48]. An accelerator generator for a single computation engine using high-level synthesis (HLS) is proposed in [21]. In [49,50], a technique to utilize on-chip memories for CNNs with branches, such as ResNet or GoogLeNet, was proposed, achieving speedup by dedicating the limited on-chip memory to memory-bound layers.…”
Section: Cnn Accelerator Architecturesmentioning
confidence: 99%
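The citation statement above mentions that Winograd-style algorithms reduce the number of multiplications in a convolution compute array. As an illustration only (the function names and structure below are our own, not from the cited work [48]), here is the classic 1-D minimal-filtering case F(2,3): two outputs of a 3-tap filter cost 4 multiplications instead of the 6 a direct convolution needs.

```python
def direct_f2_3(d, g):
    """Direct 1-D convolution: 2 outputs of a 3-tap filter, 6 multiplications."""
    return [d[0] * g[0] + d[1] * g[1] + d[2] * g[2],
            d[1] * g[0] + d[2] * g[1] + d[3] * g[2]]

def winograd_f2_3(d, g):
    """Winograd F(2,3): the same 2 outputs using only 4 multiplications."""
    g0, g1, g2 = g
    # Filter transform (precomputable once per filter, so its cost amortizes).
    u = [g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2]
    # Input transform (additions/subtractions only).
    v = [d[0] - d[2], d[1] + d[2], d[2] - d[1], d[1] - d[3]]
    # Element-wise product: the only 4 multiplications in the algorithm.
    m = [u[i] * v[i] for i in range(4)]
    # Output transform (additions/subtractions only).
    return [m[0] + m[1] + m[2], m[1] - m[2] - m[3]]

d = [1.0, 2.0, 3.0, 4.0]   # input tile of 4 samples
g = [0.5, 1.0, -1.0]       # 3-tap filter
print(direct_f2_3(d, g))
print(winograd_f2_3(d, g))  # same result, fewer multiplications
```

The 2-D variant used in CNN accelerators, F(2x2, 3x3), nests this construction and drops the multiplication count for a 3x3 convolution from 36 to 16 per output tile, which is why it maps well onto a fixed compute array.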