Abstract: We present a novel deep neural network (DNN) training scheme and RRAM in-memory computing (IMC) hardware evaluation towards achieving high robustness to RRAM device/array variations and adversarial input attacks. We present improved IMC inference accuracy results evaluated on state-of-the-art DNNs including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision for the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from three different RRAM-based…
“…IMC architectures are known for their improved energy efficiency and throughput, but they have some drawbacks. One such drawback is the limited precision of the IMC crossbar array, particularly in the memory cell and ADC, which can affect the accuracy of DNN inference [74,75]. Additionally, noise within analog computation can also harm DNN inference accuracy.…”
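The ADC-precision limitation described above can be illustrated with a minimal numpy sketch. This assumes an ideal analog dot product on one crossbar column followed by a uniform ADC; the full-scale range and quantizer model are illustrative, not taken from any cited design:

```python
import numpy as np

def imc_mac_with_adc(inputs, weights, adc_bits):
    """Analog dot product on a crossbar column, then ADC quantization.

    Hypothetical model: the bit-line output equals the exact dot
    product; a uniform ADC with `adc_bits` of resolution quantizes it
    over an assumed full-scale range of len(inputs) * max(weights).
    """
    analog = float(np.dot(inputs, weights))      # ideal analog MAC
    full_scale = len(inputs) * weights.max()     # assumed ADC range
    levels = 2 ** adc_bits - 1
    step = full_scale / levels
    return round(analog / step) * step           # quantized readout

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 64).astype(float)   # binary activations
w = rng.uniform(0.0, 1.0, 64)              # analog conductances
exact = float(np.dot(x, w))
coarse = imc_mac_with_adc(x, w, adc_bits=4)
fine = imc_mac_with_adc(x, w, adc_bits=8)
# a higher-resolution ADC readout lands closer to the exact analog value
assert abs(fine - exact) <= abs(coarse - exact)
```

Accumulated over the thousands of MAC readouts in a DNN layer, this per-column rounding error is one mechanism by which limited ADC precision degrades inference accuracy.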
Section: Challenges With IMC Architectures
“…In [76], VAT is combined with dynamic precision quantization to mitigate the post-mapping accuracy loss. Another approach, proposed in [75], involves injecting RRAM macro measurement results that include variability and noise during the DNN training process to improve the DNN accuracy of the RRAM IMC hardware. Mohanty et al. [88] propose post-mapping training that selects a random subset of weights and maps them to an on-chip memory to recover the accuracy.…”
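The noise-injection idea in [75] can be caricatured in a few lines of numpy: perturb the weights with a multiplicative Gaussian term on every forward pass during training, so the learned model tolerates the same perturbation at inference. The Gaussian noise model and toy classification task below are hypothetical stand-ins for the measured RRAM macro data:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(w, x, noise_std=0.0):
    """Linear layer whose weights are re-perturbed on each call,
    mimicking RRAM conductance variation (hypothetical Gaussian model)."""
    w_noisy = w * (1.0 + noise_std * rng.standard_normal(w.shape))
    return x @ w_noisy

# toy binary task: label is the sign of the first feature
X = rng.standard_normal((256, 8))
y = np.sign(X[:, 0])                 # labels in {-1, +1}

def train(noise_std, steps=300, lr=0.1):
    w = np.zeros(8)
    for _ in range(steps):
        out = forward(w, X, noise_std)                    # noisy forward pass
        grad = 2 * ((out - y)[:, None] * X).mean(axis=0)  # MSE gradient
        w -= lr * grad
    return w

w_trained = train(noise_std=0.3)
# evaluate under the same simulated device variation
acc = np.mean(np.sign(forward(w_trained, X, noise_std=0.3)) == y)
```

Because the noise is resampled every step, gradient descent is pushed toward weights whose decisions survive the perturbation, which is the intuition behind training with injected hardware noise.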
Section: Block Diagram of an SRAM-Based IMC Crossbar Array, An Array of…
In-memory computing (IMC)-based hardware reduces latency and energy consumption for compute-intensive machine learning (ML) applications. Several SRAM/RRAM-based IMC hardware architectures to accelerate ML applications have been proposed in the literature. However, crossbar-based IMC hardware poses several design challenges. We first discuss the different ML algorithms recently adopted in the literature. We then discuss the hardware implications of ML algorithms. Next, we elucidate the need for IMC architecture and the different components within a conventional IMC architecture. After that, we introduce the need for 2.5D or chiplet-based architectures. We then discuss the different benchmarking simulators proposed for monolithic IMC architectures. Finally, we describe an end-to-end chiplet-based IMC benchmarking simulator, SIAM.
“…While this work performs experiments using noise models obtained from various SRAM arrays, it still lacks a full-fledged hardware demonstration. Cherupally et al. (2021) obtain a noise model of an RRAM-based crossbar often found in analog in-memory computing architectures. The noise model is then used in simulation to study the adversarial robustness of neural networks deployed on such a crossbar.…”
Event-based dynamic vision sensors provide very sparse output in the form of spikes, which makes them suitable for low-power applications. Convolutional spiking neural networks model such event-based data and develop their full energy-saving potential when deployed on asynchronous neuromorphic hardware. Event-based vision being a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received little attention so far. We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and demonstrate smaller perturbation magnitudes at higher success rates than the current state-of-the-art algorithms. For the first time, we also verify the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations, the effect of adversarial training as a defense strategy, and future directions.
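One way the adaptation of white-box attacks to discrete, sparse event data could look (a hypothetical sketch, not the paper's algorithm): given the loss gradient with respect to a binary event frame, toggle only the k events whose flip most increases the loss, so the perturbation stays discrete and sparse by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_event_attack(events, grad, k):
    """Flip the k binary events whose toggling most increases the loss.

    Hypothetical adaptation of white-box gradient attacks to spike
    data: instead of adding a continuous perturbation, select the few
    on/off events with the largest loss-increasing flip gain.
    """
    # gain from flipping: turning a 0 on helps if grad > 0,
    # turning a 1 off helps if grad < 0
    gain = np.where(events == 0, grad, -grad)
    flip = np.argsort(gain)[::-1][:k]      # top-k most damaging flips
    attacked = events.copy()
    attacked[flip] ^= 1                    # toggle the selected events
    return attacked

events = rng.integers(0, 2, 100)           # one sparse binary spike frame
grad = rng.standard_normal(100)            # ∂loss/∂event (white-box access)
adv = sparse_event_attack(events, grad, k=5)
assert np.sum(adv != events) == 5          # perturbation stays k-sparse
```

Bounding the number of flipped events rather than an L∞ norm is one natural way to measure perturbation magnitude in the event-based setting.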
“…Read et al. [10] reverse engineered the weights and biases of DNN models mapped on analog CIM systems. Cherupally et al. [11] studied adversarial attacks on analog RRAM‐based CIM systems. In this study, we analyze CIM privacy breach vulnerabilities by reconstructing users’ private input data from power side‐channel profiling of CIM systems.…”
Analog compute‐in‐memory (CIM) systems are promising candidates for deep neural network (DNN) inference acceleration. However, as the use of DNNs expands, protecting user input privacy has become increasingly important. Herein, a potential security vulnerability is identified wherein an adversary can reconstruct the user's private input data from a power side‐channel attack even without knowledge of the stored DNN model. An attack approach using a generative adversarial network is developed to achieve high‐quality data reconstruction from power leakage measurements. The analyses show that the attack methodology is effective in reconstructing user input data from power leakage of the analog CIM accelerator, even at large noise levels and after countermeasures. To demonstrate the efficacy of the proposed approach, an example of CIM inference of U‐Net for brain tumor detection is attacked, and the original magnetic resonance imaging medical images can be successfully reconstructed even at a noise level of 20% standard deviation of the maximum power signal value. This study highlights a potential security vulnerability in emerging analog CIM accelerators and raises awareness of needed safety features to protect user privacy in such systems.
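The leakage premise behind this attack can be illustrated with a toy Hamming-weight-style model (entirely hypothetical; the actual attack trains a generative adversarial network on real measurements): per-column power tracks the input-dependent MAC value, so even with noise at 20% of the peak signal the trace remains correlated with the private computation:

```python
import numpy as np

rng = np.random.default_rng(3)

def cim_power_trace(x, w, noise_frac=0.2):
    """Hypothetical leakage model: each crossbar column draws power
    proportional to its analog MAC magnitude, plus Gaussian noise
    scaled to a fraction of the peak signal value."""
    signal = np.abs(x @ w)                  # per-column compute power
    noise = noise_frac * signal.max() * rng.standard_normal(signal.shape)
    return signal + noise

w = rng.uniform(0.0, 1.0, (64, 256))        # stored model (unknown to attacker)
x = rng.integers(0, 2, 64).astype(float)    # user's private binary input
trace = cim_power_trace(x, w)               # what the adversary measures
clean = np.abs(x @ w)
corr = np.corrcoef(trace, clean)[0, 1]      # leakage-to-computation correlation
```

A positive residual correlation at this noise level is the signal a learned reconstruction model can exploit across many traces; the GAN in the study is what turns that weak per-trace leakage into full input reconstruction.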