2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2018.8341970

MATIC: Learning around errors for efficient low-voltage neural network accelerators

Abstract: As a result of the increasing demand for deep neural network (DNN)-based services, efforts to develop dedicated hardware accelerators for DNNs are growing rapidly. However, while accelerators with high performance and efficiency on convolutional deep neural networks (Conv-DNNs) have been developed, less progress has been made with regard to fully-connected DNNs (FC-DNNs). In this paper, we propose MATIC (Memory Adaptive Training with In-situ Canaries), a methodology that enables aggressive voltage scaling of a…
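The mechanism the abstract describes, training with the memory's fault behavior in the loop so the weights adapt to it, can be illustrated with a toy Python sketch. This is not MATIC's implementation: the model, the quantization scheme, the stuck-bit locations, and the straight-through gradient are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(w, bits=8):
        # Symmetric uniform quantization to signed `bits`-bit integers.
        scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1) + 1e-12
        return np.round(w / scale).astype(np.int32), scale

    def inject_faults(q, fault_map):
        # Apply stuck-at faults to 8-bit two's-complement weight words.
        # fault_map: list of (flat_index, bit_position, stuck_value).
        words = q.astype(np.int8).view(np.uint8).copy()
        for idx, bit, val in fault_map:
            if val:
                words[idx] |= np.uint8(1 << bit)
            else:
                words[idx] &= np.uint8(~(1 << bit) & 0xFF)
        return words.view(np.int8).astype(np.float64)

    # Toy task: one linear layer whose weights pass through the faulty-memory
    # model on every forward pass, so training adapts to the stuck bits below.
    n, d = 512, 16
    X = rng.normal(size=(n, d))
    y = (X @ rng.normal(size=d) > 0).astype(np.float64)
    w = rng.normal(size=d) * 0.1
    fault_map = [(2, 6, 1), (7, 0, 0), (11, 3, 1)]   # assumed fault locations

    for step in range(500):
        q, scale = quantize(w)
        w_eff = inject_faults(q, fault_map) * scale  # weights as the SRAM stores them
        p = 1.0 / (1.0 + np.exp(-(X @ w_eff)))       # sigmoid output
        grad = X.T @ (p - y) / n                     # straight-through estimator:
        w -= 0.5 * grad                              # gradients skip quantize/fault ops

    q, scale = quantize(w)
    acc = np.mean(((X @ (inject_faults(q, fault_map) * scale)) > 0) == y)
    print(f"training accuracy under stuck-at faults: {acc:.1%}")

Because the stuck bits stay fixed across training steps, gradient descent steers the remaining weights to compensate, which is the intuition behind "learning around errors" in the title.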

Cited by 50 publications (55 citation statements) · References 22 publications
Citation types: 0 supporting, 53 mentioning, 0 contrasting
“…The verification of simulation-based works on real fabric can be a crucial concern; also, the real hardware works are mostly performed on customized ASICs, and reproducing those results on COTS systems is a crucial question. On the other hand, there have not been thorough efforts on the resilience of the DNN training phase; recent works in part cover this area [24], [25], [40]–[42]. For instance, [41], [42] analyzed only the fully-connected model of DNNs, [24] carried out the analysis on a customized ASIC model of the DNN, and finally [25] performed a simulation-based study.…”
Section: Related Work (mentioning)
confidence: 99%
“…On the other hand, there have not been thorough efforts on the resilience of the DNN training phase; recent works in part cover this area [24], [25], [40]–[42]. For instance, [41], [42] analyzed only the fully-connected model of DNNs, [24] carried out the analysis on a customized ASIC model of the DNN, and finally [25] performed a simulation-based study. Our paper extends the study of the resilience of DNN training, especially by using the fault maps of low-voltage SRAM-based on-chip memories of real FPGA fabrics.…”
Section: Related Work (mentioning)
confidence: 99%
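A minimal sketch of the fault-map mechanics this excerpt points to: stuck-at bit maps, here randomly generated per supply voltage rather than measured on real FPGA fabric, are applied to stored weight words before use. The voltages, fault counts, and 32-bit word layout are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.normal(size=4096).astype(np.float32)
    words = weights.view(np.uint32)          # raw 32-bit words as stored in SRAM

    def apply_fault_map(words, fault_map):
        # fault_map: iterable of (word_index, bit_position, stuck_value).
        out = words.copy()
        for idx, bit, val in fault_map:
            if val:
                out[idx] |= np.uint32(1 << bit)
            else:
                out[idx] &= np.uint32(~(1 << bit) & 0xFFFFFFFF)
        return out

    # Hypothetical characterization: lower supply voltage -> more faulty cells.
    for vdd, n_faults in [(0.9, 0), (0.7, 40), (0.6, 400)]:
        fmap = [(int(i), int(rng.integers(32)), int(rng.integers(2)))
                for i in rng.integers(0, words.size, size=n_faults)]
        corrupted = apply_fault_map(words, fmap).view(np.float32)
        frac = np.mean(corrupted != weights)
        print(f"VDD={vdd:.1f} V: {n_faults:4d} stuck bits, "
              f"{frac:.2%} of weights corrupted")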
“…Once the network has been quantized and compressed, we can further leverage resource versus correctness tradeoffs by storing the weights in approximate SRAM [142], which occasionally produces read errors. Recent work [96] shows that correct retraining and fault detection mechanisms can mitigate the negative effects of SRAM read upsets on classification tasks.…”
Section: Example: Digit Recognition (mentioning)
confidence: 99%
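In contrast to the persistent stuck-at faults above, the read errors this excerpt mentions are transient: each read of approximate SRAM may return a corrupted word. Below is a sketch of that upset model, with assumed flip probabilities and an 8-bit word size.

    import numpy as np

    rng = np.random.default_rng(2)

    def noisy_read(stored, p_flip):
        # Each read flips every stored bit independently with probability
        # p_flip (an assumed error rate), unlike a persistent stuck-at fault.
        flips = rng.random((stored.size, 8)) < p_flip
        masks = (flips * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
        return stored ^ masks

    weights_q = rng.integers(-128, 128, size=1024).astype(np.int8)
    stored = weights_q.view(np.uint8)        # weight words in approximate SRAM

    for p in (1e-4, 1e-3, 1e-2):
        read_back = noisy_read(stored, p).view(np.int8)
        upset = np.mean(read_back != weights_q)
        print(f"p_flip={p:.0e}: {upset:.2%} of weight words upset on one read")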
“…In memories, aggressive voltage scaling is one of the most prominent approaches, which can lead to significant efficiency gains. Towards this, Kim et al. [33] proposed MATIC, a memory-adaptive training approach that enables aggressive voltage scaling of ac-…”
Footnote 1: Weight stationary maximizes convolutional reuse and filter reuse; output stationary maximizes partial-sum accumulation and input feature-map reuse; row stationary maximizes all of these parameters.
Section: Hardware-level Optimizations (mentioning)
confidence: 99%
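The dataflow terms defined in the footnote can be made concrete with toy loop nests for a matrix multiply; which value the innermost loop holds fixed determines what stays resident in a processing element. This is a schematic sketch, not a model of any cited accelerator.

    import numpy as np

    # C[M,N] = A[M,K] @ B[K,N]
    M, K, N = 4, 5, 3
    A = np.arange(M * K, dtype=float).reshape(M, K)
    B = np.arange(K * N, dtype=float).reshape(K, N)

    # Output stationary: one partial sum C[m, n] stays in the PE register
    # until it is fully accumulated.
    C_os = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            for k in range(K):
                C_os[m, n] += A[m, k] * B[k, n]

    # Weight stationary: each weight B[k, n] is fetched once and reused
    # across all rows of A before the next weight is loaded.
    C_ws = np.zeros((M, N))
    for k in range(K):
        for n in range(N):
            w = B[k, n]              # weight held stationary in the PE
            for m in range(M):
                C_ws[m, n] += A[m, k] * w

    assert np.allclose(C_os, C_ws) and np.allclose(C_os, A @ B)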