Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design 2022
DOI: 10.1145/3531437.3539729
Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

Cited by 12 publications (6 citation statements) · References 14 publications
“…Moreover, SNNs appear to be energy-efficient alternatives to ANNs, due to their brain-like computations and communication using sparse 1-bit spiking activations [13]. Besides, the robustness of ANNs and SNNs to noisy synaptic weights has been compared [12], [14]. However, the authors in [12] do not consider a realistic hardware model, as weights are simulated with 32-bit floating-point precision and only one type of error is considered.…”
Section: Introduction
confidence: 99%
“…However, the authors in [12] do not consider a realistic hardware model, as weights are simulated with 32-bit floating-point precision and only one type of error is considered. A more realistic hardware model of an RRAM crossbar implementation for evaluating the robustness of SNNs and ANNs is presented in another work [14]. Nevertheless, none of these works [11], [12], [14] considers the benefits of injecting noise during training, which has proven very effective at enhancing the fault tolerance of neural networks [15].…”
Section: Introduction
confidence: 99%
“…[3]. Motivated by this, robustness analysis and adversarial defense in AccSNNs have been thoroughly investigated in several recent works [4]–[7]. Very recently, in [8], the authors showed that approximate DNNs (AxDNNs) are more prone to adversarial attacks than accurate DNNs (AccDNNs).…”
Section: Introduction
confidence: 99%
“…A few works have investigated the impact of ‘benign’ non-ideal properties in RRAM-based hardware on adversarial attacks toward image classification. One work discusses how the non-ideal properties reduce the adversarial attack success rate and concludes that RRAM-based neuromorphic hardware is inherently robust against adversarial attacks (Bhattacharjee and Panda, 2020). A more recent work, however, points out that hard faults in the RRAM crossbar array can be leveraged to substantially strengthen adversarial attacks and effectively break through software defense strategies (Lv et al., 2021).…”
Section: Introduction
confidence: 99%