Proceedings of the 56th Annual Design Automation Conference (DAC 2019)
DOI: 10.1145/3316781.3317908
Sensitivity based Error Resilient Techniques for Energy Efficient Deep Neural Network Accelerators

Cited by 36 publications (13 citation statements) | References 6 publications
“…The MAC encountering a timing error steals a clock cycle from the downstream MAC to recompute the correct output and replaces the downstream MAC's output with its own. Choi et al. [64] demonstrate a methodology to enhance the resilience of DNN accelerators based on the sensitivity variations of neurons. The technique detects an error in the multiplier by augmenting each MAC unit with a Razor flip-flop [36] between the multiplier and the accumulator.…”
Section: Enhancements Around Architecturementioning
confidence: 99%
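The cycle-stealing behavior described above can be modeled at a behavioral level. The sketch below is illustrative only (the function name and error-flag interface are assumptions, not from the paper): a MAC unit that detects a Razor-style timing error recomputes its product in a stolen cycle, and the downstream MAC's contribution is bypassed, relying on the DNN's inherent error resilience.

```python
def chain_mac(weights, acts, timing_error):
    """Behavioral model of a MAC chain with cycle stealing.

    Each MAC adds w*a to a running sum. When `timing_error[i]` is True,
    MAC i recomputes its product in a cycle stolen from MAC i+1, so
    MAC i+1's own product is bypassed (dropped from the sum).
    """
    acc = 0
    skip_next = False
    for w, a, err in zip(weights, acts, timing_error):
        if skip_next:
            # This MAC's cycle was stolen by the upstream unit; its
            # product never reaches the accumulator.
            skip_next = False
            continue
        acc += w * a  # recomputation in the stolen cycle yields the correct product
        if err:
            skip_next = True
    return acc
```

With no timing errors the model computes the exact dot product; with an error flagged, one downstream partial product is dropped rather than accumulating a wrong value.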
“…Previous work [26,33] combines circuit-level error detection techniques with architectural error bypassing/masking to mitigate the effect of timing and SRAM errors in voltage-underscaled DNN accelerators. The irregular resilience behavior of DNN weights and activations is noted in [3,27], and this characteristic is exploited to improve the resilience of neural-network processing systems through reliability-aware resource allocation. An error-correcting output coding scheme is used in [19] to enhance the self-correcting capability of neural networks.…”
Section: Related Workmentioning
confidence: 99%
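The error-correcting output coding idea mentioned above can be sketched in a few lines. This is a generic ECOC decoder, not the specific scheme of [19]; the codebook is an invented toy example. Each class is assigned a binary codeword, and a (possibly corrupted) network output is mapped to the class whose codeword is nearest in Hamming distance, so single-bit errors are corrected.

```python
import numpy as np

# Toy codebook: one 5-bit codeword per class (illustrative values only).
CODEBOOK = np.array([
    [1, 1, 1, 0, 0],   # class 0
    [0, 0, 1, 1, 1],   # class 1
    [1, 0, 0, 0, 1],   # class 2
])

def ecoc_decode(output_bits):
    """Return the class whose codeword is nearest (Hamming distance)
    to the network's binary output vector, correcting small errors."""
    dists = np.abs(CODEBOOK - np.asarray(output_bits)).sum(axis=1)
    return int(np.argmin(dists))
```

A clean codeword decodes to its class, and flipping one output bit still recovers the correct label because the codewords are spaced apart.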
“…An impacted weight value in a neuron or convolutional filter propagates into an entire channel of activations. If the threshold activations are chosen from the adjacent channels, even an entire channel of anomalous activations in the output feature map can be easily filtered; thus, the weight-error resilience of the network is boosted in both fully connected and convolutional layers.…”
Section: Complementing Variable Suppression Withmentioning
confidence: 99%
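The channel-wise filtering described above can be sketched as follows. This is a minimal interpretation, not the paper's implementation: the function name and the `margin` parameter are assumptions. A threshold is derived from the adjacent channels' peak activations, and a channel whose own peak far exceeds it is zeroed out as anomalous.

```python
import numpy as np

def suppress_anomalous_channels(fmap, margin=4.0):
    """fmap: (C, H, W) output feature map.

    For each channel, derive a threshold from the adjacent channels'
    maximum absolute activation; zero out any channel whose own peak
    exceeds `margin` times that threshold, on the assumption that a
    corrupted weight inflates the entire channel.
    """
    C = fmap.shape[0]
    out = fmap.copy()
    for c in range(C):
        neighbors = [fmap[(c - 1) % C], fmap[(c + 1) % C]]
        thresh = margin * max(np.abs(n).max() for n in neighbors)
        if np.abs(fmap[c]).max() > thresh:
            out[c] = 0.0  # entire channel deemed anomalous; filtered out
    return out
```

Because a weight error corrupts a whole output channel, comparing against neighboring channels (rather than a fixed constant) keeps the threshold adaptive to the layer's activation scale.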
“…popular targets for voltage scaling seeking energy savings [30], [4], [10]. While effective, voltage scaling has the disadvantage of changing circuit delay, which causes timing errors that can degrade application quality.…”
Section: Introductionmentioning
confidence: 99%