2020 IEEE International Reliability Physics Symposium (IRPS)
DOI: 10.1109/irps45951.2020.9129313

Device-aware inference operations in SONOS nonvolatile memory arrays

Abstract: Non-volatile memory arrays can deploy pre-trained neural network models for edge inference. However, these systems are affected by device-level noise and retention issues. Here, we examine damage caused by these effects, introduce a mitigation strategy, and demonstrate its use in a fabricated array of SONOS (Silicon-Oxide-Nitride-Oxide-Silicon) devices. On MNIST, Fashion-MNIST, and CIFAR-10 tasks, our approach increases resilience to synaptic noise and drift. We also show strong performance can be realized with …
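As a rough illustration of the kind of resilience evaluation the abstract describes (not the authors' actual method), here is a minimal Python/NumPy sketch that perturbs a pre-trained weight matrix with multiplicative Gaussian programming noise and a global drift factor, then compares outputs. The helper name `simulate_noisy_inference` and all parameter values are hypothetical.

```python
import numpy as np

def simulate_noisy_inference(W, x, noise_sigma=0.05, drift=0.0, rng=None):
    """Apply multiplicative Gaussian programming noise and a uniform
    conductance-drift factor to a weight matrix, then compute the layer
    output. Hypothetical sketch, not the paper's code."""
    rng = rng or np.random.default_rng(0)
    # Device-level programming noise: each stored weight deviates from
    # its target by a few percent (multiplicative Gaussian).
    W_noisy = W * (1.0 + noise_sigma * rng.standard_normal(W.shape))
    # Retention drift: conductances decay over time, modeled here as a
    # single global shrink factor (a deliberate simplification).
    W_noisy *= (1.0 - drift)
    return x @ W_noisy

# Example: measure how a 784x10 linear classifier's outputs shift.
rng = np.random.default_rng(42)
W = rng.standard_normal((784, 10)) * 0.01
x = rng.standard_normal((1, 784))
clean = x @ W
noisy = simulate_noisy_inference(W, x, noise_sigma=0.1, drift=0.05, rng=rng)
print("mean absolute output shift:", np.abs(noisy - clean).mean())
```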

Cited by 13 publications (4 citation statements) · References 21 publications
“…Our recurrent neural network design, newly developed for this work, is visible in …. The RNN core crossbar is connected to a read-out/logit crossbar of dimensions D_o × L, where L is the number of classes (here L = 10); critically, the second core is only activated at the final time step (every t steps). In our resilience-testing strategy, we have considered two classes of pre-trained networks: noise-prepared or regularized networks, which are trained with jittered Gaussian filters on every hidden neuron's ReLU function following the scheme given in [14], and standard/un-prepared networks, which have been trained without noise. As first suggested in [15], the use of noise regularization provides a definitive improvement in the inference (test) performance of models under internal and external noise or perturbation effects.…”
Section: Methods (mentioning)
confidence: 99%
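To make the noise-regularization idea in that statement concrete, here is a minimal Python/NumPy sketch of a ReLU whose pre-activation is jittered with Gaussian noise during training only. The injection point, the noise scale `sigma`, and the layer sizes are assumptions for illustration, not details taken from [14].

```python
import numpy as np

def relu_with_jitter(z, sigma=0.1, training=True, rng=None):
    """ReLU with Gaussian jitter on the pre-activation during training
    only -- a sketch in the spirit of the noise-regularization scheme
    quoted above; sigma and the injection point are assumptions."""
    rng = rng or np.random.default_rng()
    if training:
        z = z + sigma * rng.standard_normal(z.shape)
    return np.maximum(z, 0.0)

# Hidden-layer forward pass: noise is injected at every hidden ReLU,
# so the trained weights learn to tolerate device-level perturbations.
rng = np.random.default_rng(0)
W_h = rng.standard_normal((784, 128)) * 0.05
x = rng.standard_normal((32, 784))          # a batch of 32 inputs
h = relu_with_jitter(x @ W_h, sigma=0.1, training=True, rng=rng)
print(h.shape)  # (32, 128)
```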
“…Analog hardware represents digital values in analog quantities such as voltages or light pulses and performs computation in the analog domain [5]. This form of computation is cheap and projects a 2X improvement over digital hardware in speed and energy efficiency [13], [14], as it can achieve projected throughputs of multiple tera-operations per second (TOPS) and femtojoule energy budgets per multiply-and-accumulate (MAC) operation [4], [15]–[17].…”
Section: Introduction (mentioning)
confidence: 99%
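As background for how an analog crossbar performs MACs in parallel, here is a minimal Python/NumPy sketch assuming an idealized resistive crossbar: inputs are encoded as voltages, weights as conductances, and each column current is a dot product by Ohm's and Kirchhoff's laws. The function name and the value ranges are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Idealized analog matrix-vector multiply: column currents
    I_j = sum_i G[i, j] * v[i] follow from Ohm's and Kirchhoff's laws.
    Real arrays add programming noise, IR drop, and ADC quantization,
    all omitted in this sketch."""
    return v @ G  # currents summed along each column

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1e-6, size=(128, 10))  # conductances in siemens (assumed range)
v = rng.uniform(0.0, 0.2, size=128)         # input voltages in volts (assumed range)
print(crossbar_mvm(G, v))                   # one analog MAC per cell, computed in parallel
```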
“…This method of computation is cheap and projects a 2X performance improvement over digital hardware in speed and energy efficiency [13], [14]. These improvements arise because such systems can achieve projected throughputs of multiple tera-operations per second (TOPS) and femtojoule energy budgets per multiply-and-accumulate (MAC) operation [4], [15]–[17].…”
Section: Introduction (mentioning)
confidence: 99%