2021
DOI: 10.1109/ojcas.2020.3035402
MOSDA: On-Chip Memory Optimized Sparse Deep Neural Network Accelerator With Efficient Index Matching

Cited by 1 publication (1 citation statement) · References 38 publications
“…The benefits of having a sparse predictive model are fully exploited only when the model is being executed on a hardware accelerator that can process sparsified models. In the field of CNN accelerators this has been heavily exploited, resulting in numerous solutions being proposed [45][46][47][48][49]. Surprisingly, in the field of DTs, SVMs and ANNs, only a handful of hardware accelerators capable of directly processing sparse models have been proposed in [44,50], despite the obvious benefits of accelerating sparse ML predictive models.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%