Proceedings of the 56th Annual Design Automation Conference, 2019
DOI: 10.1145/3316781.3317770
Analog/Mixed-Signal Hardware Error Modeling for Deep Learning Inference

Cited by 44 publications (45 citation statements)
References 11 publications
“…There exist many different methods of training a neural network with noise that aim to improve the resilience of the model to analog mixed-signal hardware. These include injecting additive noise on the inputs of every layer 20, on the pre-activations 22,23, or only on the input data 47. Moreover, injecting multiplicative Gaussian noise into the weights 34 (σ^l_{δW_tr,ij} ∝ |W^l_{ij}|) is also defensible given the noise observed on the hardware.…”
Section: Discussion
confidence: 99%
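The multiplicative weight-noise scheme cited above, where each weight is perturbed with a standard deviation proportional to its own magnitude, can be sketched as follows. This is an illustrative NumPy sketch, not the cited authors' implementation; the noise-scale parameter `eta` and the function name are assumptions for illustration.

```python
import numpy as np

def perturb_weights(W, eta=0.02, rng=None):
    """Multiplicative Gaussian weight noise: perturb each weight W_ij
    with std sigma_ij = eta * |W_ij|, so larger weights receive
    proportionally larger noise (eta is a hypothetical noise scale)."""
    if rng is None:
        rng = np.random.default_rng()
    return W + rng.normal(0.0, eta * np.abs(W))

# During noise-injection training, a fresh perturbation would be drawn
# on every forward pass so the network learns noise-tolerant weights.
W = np.array([[0.5, -1.0], [2.0, 0.1]])
W_noisy = perturb_weights(W, rng=np.random.default_rng(0))
```

Drawing a new perturbation each forward pass (rather than fixing one noisy copy) is what makes the trained weights robust to the device-to-device and cycle-to-cycle variations observed on analog hardware.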
“…As early as 1994, it was shown that injecting noise into the synaptic weights during training enhances the tolerance of multi-layer perceptrons to weight perturbations, and the application of this technique to analog neural hardware was discussed 34. Recent works have also proposed applying noise to the layer inputs or pre-activations to improve network tolerance to hardware noise 20,23. In this work, we follow the original approach of Murray et al. 34 of injecting Gaussian noise into the synaptic weights during training.…”
Section: ResNet Block 1, 10 Layers
confidence: 99%
“…Deep neural network inference tasks, the designated applications for the presented IMC system, can tolerate this small reduction in the precision of the MAC operation, usually with no loss, or in certain cases an insignificant loss, in classification accuracy. The effects of ADC quantization, to which any reduced-precision implementation is subject, are studied in detail in [27].…”
Section: F. Noise and Mismatch Impact
confidence: 99%
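The ADC quantization effect mentioned above can be illustrated with a simple uniform-quantizer model: an analog MAC result is computed in full precision and then rounded to the nearest of 2^n ADC codes. This is a generic sketch under assumed parameters (n_bits, full_scale), not the specific ADC of the cited IMC system.

```python
import numpy as np

def adc_quantize(x, n_bits=8, full_scale=1.0):
    """Model an n-bit uniform ADC over [-full_scale, +full_scale):
    clip the analog value to the ADC input range, then round it to
    the nearest quantization step (step = 2*full_scale / 2**n_bits)."""
    levels = 2 ** n_bits
    step = 2.0 * full_scale / levels
    x = np.clip(x, -full_scale, full_scale - step)  # saturate at the rails
    return np.round(x / step) * step

# Analog in-memory dot product followed by ADC read-out:
w = np.array([0.2, -0.5, 0.1])   # stored conductance-encoded weights
a = np.array([0.9, 0.3, -0.4])   # input activations
y = adc_quantize(np.dot(w, a), n_bits=8)
```

Within the ADC range, the quantization error is bounded by half a step, which is why moderate ADC resolutions often cost little or no classification accuracy for inference workloads.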