2020 | Preprint
DOI: 10.48550/arxiv.2011.11840

Benchmarking Inference Performance of Deep Learning Models on Analog Devices

Abstract: Deep learning models implemented on analog hardware are promising for computation- and energy-constrained systems such as edge computing devices. However, the analog nature of the device and its many associated noise sources will change the values of the weights in trained deep learning models deployed on such devices. In this study, a systematic evaluation of the inference performance of trained popular deep learning models for image classification deployed on analog devices has been carried out, where…
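The kind of evaluation the abstract describes, perturbing the weights of a trained classifier and re-measuring test accuracy, can be approximated with a short script. A minimal sketch, assuming PyTorch, a pretrained model, and additive Gaussian weight noise as a stand-in for analog-device non-idealities; the paper's actual noise model and protocol may differ:

```python
# Minimal sketch (assumptions: PyTorch, a pretrained image classifier, and
# additive Gaussian weight noise as a stand-in for analog-device
# non-idealities; the paper's actual noise model and protocol may differ).
import copy
import torch

def perturb_weights(model, rel_std=0.05):
    """Return a copy of `model` with every weight tensor perturbed by
    zero-mean Gaussian noise whose std is `rel_std` times that tensor's std."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            if p.numel() > 1:  # skip scalar parameters (std undefined)
                p.add_(torch.randn_like(p) * rel_std * p.std())
    return noisy

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` on a test DataLoader."""
    model.eval().to(device)
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: compare clean vs. noisy-weight accuracy on a test loader.
# clean_acc = accuracy(model, test_loader)
# noisy_acc = accuracy(perturb_weights(model, rel_std=0.05), test_loader)
```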

Cited by 2 publications (3 citation statements)
References 23 publications
“…Our realistic PCM model has different noise mean and variance for each input. Using this model is an improvement over prior work that only investigates white Gaussian noise (Upadhyaya et al., 2019; Zhou et al., 2020; Fagbohungbe & Qian, 2020) when storing NN models on noisy storage.…”
Section: Phase Change Memory (PCM)
Citation type: mentioning
confidence: 99%
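An illustrative contrast of the distinction being drawn, assuming PyTorch; the level-dependent mean and standard deviation used below are made-up placeholders, not the citing paper's fitted PCM model:

```python
# Illustrative contrast only (assumption: the cited PCM model ties noise
# statistics to the stored value; the mean/std functions here are made-up
# placeholders, not the citing paper's fitted model).
import torch

def awgn(w, std=0.05):
    """White Gaussian noise: identical zero mean and std for every value."""
    return w + std * torch.randn_like(w)

def value_dependent_noise(w):
    """Noise whose mean and std vary with each stored value, loosely
    mimicking a level-dependent analog-storage error model."""
    mean = 0.02 * w              # placeholder level-dependent bias
    std = 0.01 + 0.05 * w.abs()  # placeholder level-dependent spread
    return w + mean + std * torch.randn_like(w)
```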
“…To the best of our knowledge, this is the first work on NN compression for analog storage devices with realistic noise characteristics, unlike previous work that only investigates white Gaussian noise (Upadhyaya et al., 2019; Zhou et al., 2020; Fagbohungbe & Qian, 2020). In particular, our contributions are:…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…We then added AWGN to the well-trained NN weights under a given SNR and used different estimators to denoise the noisy weights. A caveat here is that the parameters of the batch-normalization and the layer-normalization layers were set to be noise-free in the experiments [6,8], because they behave differently than other parameters [20]. These parameters are few in number.…”
Section: A. Implementation Details
Citation type: mentioning
confidence: 99%
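A minimal sketch of the setup this citation describes, assuming PyTorch: AWGN is added per weight tensor at a target SNR in dB, while batch-normalization and layer-normalization parameters are left noise-free. The per-tensor SNR convention and module handling are assumptions, not taken from the cited experiments:

```python
# Minimal sketch of the described setup (assumptions: PyTorch, AWGN applied
# per weight tensor at a target SNR in dB, and normalization-layer parameters
# kept noise-free as the citing paper states; everything else is illustrative).
import copy
import torch
import torch.nn as nn

NORM_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.LayerNorm)

def add_awgn_at_snr(model, snr_db=20.0):
    """Return a copy of `model` whose non-normalization parameters carry AWGN
    such that mean(w^2) / noise_var equals the requested SNR."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for module in noisy.modules():
            if isinstance(module, NORM_TYPES):
                continue  # batch-norm / layer-norm parameters stay noise-free
            for p in module.parameters(recurse=False):
                signal_power = p.pow(2).mean()
                noise_std = (signal_power / 10.0 ** (snr_db / 10.0)).sqrt()
                p.add_(torch.randn_like(p) * noise_std)
    return noisy

# Hypothetical usage: produce noisy weights before applying a denoising estimator.
# noisy_model = add_awgn_at_snr(model, snr_db=10.0)
```

Defining the SNR per weight tensor is one common convention; the cited work may instead define it globally over all weights.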