2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/asp-dac47756.2020.9045134
When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies

Abstract: Deep Neural Networks have proved their potential in various perception tasks and have hence become an appealing option for interpretation and data processing in security-sensitive systems. However, security-sensitive systems demand not only high perception performance, but also design robustness under various circumstances. Unlike prior works that study network robustness at the software level, we investigate, from a hardware perspective, the impact of Single Event Upset (SEU)-induced parameter perturbation (SIPP) on n…


Cited by 36 publications (14 citation statements)
References 27 publications (36 reference statements)
“…The data format itself obviously decides or affects the data range. [52] found that errors in the exponent bits of 32-bit floating-point weights have a large impact on performance. [23] investigated the resilience characteristics of several floating-point and non-dynamic fixed-point representations.…”
Section: Discussion
confidence: 99%
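To make the exponent-bit observation concrete, here is a minimal sketch that flips single bits of an IEEE-754 float32 weight and prints the perturbed value; the weight value and bit positions are illustrative assumptions, not taken from the paper.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 (bit 0 = mantissa LSB, bits 23-30 =
    exponent, bit 31 = sign) and return the perturbed value."""
    (word,) = struct.unpack("<I", struct.pack("<f", value))
    (perturbed,) = struct.unpack("<f", struct.pack("<I", word ^ (1 << bit)))
    return perturbed

weight = 0.5  # hypothetical weight value
for bit in (2, 23, 30):
    print(f"bit {bit:2d}: {weight} -> {flip_bit(weight, bit)}")
```

A flip in a low mantissa bit (bit 2) barely moves the value, flipping the exponent LSB (bit 23) doubles it to 1.0, and flipping the exponent MSB (bit 30) blows it up to about 1.7e38, which is the asymmetry the cited statement describes.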
“…Fault-masking techniques are conceptually simple. We could use error-correcting codes (ECC) to protect memory elements [35] and triple-modular redundancy (TMR) to protect computational units [29,42]. However, the corresponding hardware overhead is exceptionally high.…”
Section: Limitations and Future Work
confidence: 99%
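As a rough illustration of why TMR is conceptually simple yet costly, the sketch below keeps three redundant copies of a word and recovers the correct value with a bitwise majority vote; the function name and example values are hypothetical.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies: each output bit takes
    the value that at least two copies agree on, masking any single-copy
    fault such as an SEU-induced bit flip."""
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
faulty = word ^ (1 << 5)  # a single event upset corrupts one copy
assert majority_vote(word, faulty, word) == word
```

The triplicated storage and computation plus the voter itself is exactly the overhead the statement calls exceptionally high.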
“…Different strategies have been proposed to tackle these issues. Noise-aware training [4] and uncertainty-aware neural architecture search [10][11][12] aim at fortifying DNNs so that their performance remains mostly unaffected even in the presence of device variations. However, these methods are not economical because they require re-training DNNs from scratch and cannot make use of existing pre-trained models.…”
Section: Introduction
confidence: 99%
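For context on what noise-aware training involves, here is a minimal PyTorch-style sketch that perturbs a layer's weights with Gaussian noise during the forward pass, so training sees device-variation-like perturbations; the relative-noise model and sigma value are illustrative assumptions, not the exact method of [4].

```python
import torch
import torch.nn.functional as F

def noisy_linear(layer: torch.nn.Linear, x: torch.Tensor,
                 sigma: float = 0.02) -> torch.Tensor:
    """Forward pass with weight noise proportional to each weight's
    magnitude, loosely modelling device variation; training through
    this pass pushes the network toward perturbation-robust weights."""
    noise = torch.randn_like(layer.weight) * sigma * layer.weight.abs()
    return F.linear(x, layer.weight + noise, layer.bias)
```

Because the noise is injected at every training step, the network must be trained (or re-trained) from scratch under it, which is precisely the cost the statement criticizes relative to reusing pre-trained models.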