Phase-change memory can provide a remarkable artificial synapse for neuromorphic systems, as it features excellent reliability and can be used as an analog memory. However, this approach is complicated by the fact that crystallization and amorphization differ radically: crystallization can be realized very gradually, much like synaptic potentiation, whereas amorphization tends to be abrupt, unlike synaptic depression. Addressing this non-biorealistic amorphization requires system-level solutions that carry considerable energy cost or limit the generality of the approach. This work demonstrates experimentally that an adapted memory structure, combined with an initialization electrical pulse followed by a sequence of identical fast pulses, can overcome this challenge. A single device can then naturally implement gradual long-term potentiation and depression, much like synapses in biology. Through statistical measurements, this study establishes the reproducibility of the approach, discusses its physical origin, and shows the importance of the device architecture and of the initial electrical pulse. System-level simulations show that this device is especially well suited to neuroscience-inspired learning. These results highlight how nanodevices can serve bioinspired applications while retaining the qualities of an industrial technology.
Recurrent neural networks are currently the subject of intensive research efforts to solve temporal computing problems. Neuromorphic processors (NPs), composed of networked neuron and synapse circuit models, natively compute in time and offer an ultralow-power solution particularly suited to emerging temporal edge-computing applications (wearable medical devices, for example). The most significant roadblock to addressing useful problems with neuromorphic hardware is the difficulty of maintaining healthy network dynamics in recurrent neural networks. In animal nervous systems, this is achieved via a multitude of adaptive homeostatic mechanisms that act over multiple time scales to counteract network instability induced by drift, component failure, or learning processes such as spike-timing-dependent plasticity. One such mechanism is neuronal intrinsic plasticity (IP), whereby a neuron adapts the parameters that govern its excitability so as to fire around a target rate. The approach employed in state-of-the-art NPs, based on a central volatile memory that remotely sets model parameters, critically constrains parameter variety and bandwidth, rendering the realization of these essential mechanisms impossible. This paper demonstrates how reconfigurable nonvolatile resistive memories can be incorporated into neuron and synapse circuits, allowing memory to be truly colocalized with the computational units in the computing fabric and facilitating the realization of massively parallel local plasticity mechanisms in neuromorphic hardware. Exploiting nonconventional programming operations of HfO2-based RRAM (the stochastic SET and the RESET random variable), we propose a technologically plausible IP algorithm and demonstrate its use in a recurrent neural network topology, whereby the system self-organizes to sustain stable and healthy network dynamics around a target firing rate.
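The intrinsic-plasticity idea described above can be illustrated with a minimal sketch: a neuron nudges its own excitability parameter toward a target firing rate. All names and parameter values below are illustrative assumptions, not the algorithm from the paper, which encodes the parameter in a nonvolatile resistive element.

```python
def ip_update(threshold, measured_rate, target_rate, eta=0.01,
              th_min=0.1, th_max=2.0):
    """Homeostatic rule: raise the firing threshold when the neuron fires
    above its target rate (making it less excitable), lower it when the
    neuron fires below target. The threshold is clipped to the range the
    (hypothetical) memory element can represent."""
    threshold += eta * (measured_rate - target_rate)
    return min(max(threshold, th_min), th_max)

# Example: a neuron firing at 15 Hz against a 10 Hz target becomes
# progressively less excitable over repeated adaptation steps.
th = 1.0
for _ in range(10):
    th = ip_update(th, measured_rate=15.0, target_rate=10.0)
```

In hardware, each neuron would run this local rule in parallel, which is exactly what a central volatile parameter memory cannot support.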
Spiking neural networks (SNNs) are a computational tool in which information is coded into spikes, as in some parts of the brain, unlike conventional neural networks (NNs), which compute over real numbers. SNNs can therefore implement intelligent information extraction in real time at the edge of data acquisition, complementing conventional NNs deployed for cloud computing. Both NN classes face hardware constraints due to limited computing parallelism and the separation of logic and memory. Emerging memory devices, such as resistive switching memories, phase change memories, or memristive devices in general, are strong candidates to remove these hurdles for NN applications. The well-established training procedures of conventional NNs helped define the desiderata for memristive device dynamics implementing synaptic units. The generally agreed requirements are a linear evolution of the memristive conductance upon stimulation with trains of identical pulses and a symmetric conductance change for conductance increase and decrease. Conversely, little work has been done to understand the main properties of memristive devices supporting efficient SNN operation. The reason lies in the lack of a background theory for their training. As a consequence, the requirements for NNs have been taken as a reference to develop memristive devices for SNNs. In the present work, we show that, for efficient CMOS/memristive SNNs, the requirements on synaptic memristive dynamics are very different from the needs of a conventional NN. System-level simulations of an SNN trained to classify handwritten digit images through a spike-timing-dependent plasticity protocol are performed, considering various plausible linear and non-linear synaptic memristive dynamics. We consider memristive dynamics bounded by artificial hard conductance values and dynamics limited by the natural evolution toward asymptotic values (soft boundaries).
We quantitatively analyze the impact of the resolution and non-linearity of the synapses on network training and classification performance. Finally, we demonstrate that non-linear synapses with hard boundary values enable higher classification performance and realize the best trade-off between classification accuracy and required training time. With reference to the obtained results, we discuss how memristive devices with non-linear dynamics constitute a technologically convenient solution for the development of on-line SNN training.
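The distinction between hard- and soft-boundary dynamics discussed above can be sketched with two common textbook update rules under identical potentiation pulses. This is an illustrative model under assumed parameters, not the calibrated device dynamics used in the simulations.

```python
def hard_bound_step(g, dg=0.05, g_min=0.0, g_max=1.0):
    """Linear update: a fixed conductance step per pulse, clipped at
    artificial hard boundary values g_min/g_max."""
    return min(max(g + dg, g_min), g_max)

def soft_bound_step(g, alpha=0.1, g_max=1.0):
    """Non-linear update: the step shrinks as the conductance approaches
    its asymptotic value (soft boundary)."""
    return g + alpha * (g_max - g)

# Apply 20 identical potentiation pulses to both synapse models.
g_hard = g_soft = 0.0
for _ in range(20):
    g_hard = hard_bound_step(g_hard)
    g_soft = soft_bound_step(g_soft)
# g_hard reaches its hard limit exactly; g_soft only approaches g_max
# asymptotically, so its response to further pulses keeps shrinking.
```

The trade-off studied in the abstract is between the update linearity of the first rule and the naturally saturating, non-linear response of the second.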
Resistive switching memories (RRAMs) have attracted wide interest as adaptive synaptic elements in artificial bio-inspired Spiking Neural Networks (SNNs). These devices suffer from high cycle-to-cycle and cell-to-cell conductance variability, which is usually considered a major challenge. However, biological synapses are noisy devices, and the brain appears in some situations to benefit from noise. It has been predicted that RRAM-based SNNs are intrinsically robust to synaptic variability. Here, we investigate this robustness based on extensive characterization data: we analyze the role of noise during unsupervised learning by Spike-Timing-Dependent Plasticity (STDP) for detection in dynamic input data and classification of static input data. Extensive characterizations of multi-kilobit HfO2-based Oxide-based RAM (OxRAM) arrays under different programming conditions are presented. We identify the trade-offs between programming conditions, power consumption, conductance variability, and endurance. Finally, the experimental results are used to perform system-level simulations fully calibrated on the experimental data. The results demonstrate that, similarly to biology, SNNs are not only robust to noise but that a certain amount of noise can even improve network performance. OxRAM conductance variability increases the range of synaptic values explored during the learning process. Moreover, relaxing the constraints on OxRAM conductance variability allows the system to operate at low-power programming conditions.
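How conductance variability enters an STDP update can be illustrated with a simplified binary-programming rule with multiplicative noise on the programmed state. The log-normal spread is a common empirical choice for OxRAM, but the distribution and all parameters here are assumptions for illustration, not the paper's calibrated data.

```python
import random

def stdp_update(dt, g_min=1e-6, g_max=1e-4, sigma=0.3):
    """Simplified STDP rule for a binary-programmed OxRAM synapse:
    potentiate (program toward g_max) when the presynaptic spike precedes
    the postsynaptic one (dt > 0), depress (toward g_min) otherwise.
    Cycle-to-cycle variability is modeled as multiplicative log-normal
    noise on the programmed conductance, clipped to the device window."""
    target = g_max if dt > 0 else g_min
    noisy = target * random.lognormvariate(0.0, sigma)
    return min(max(noisy, g_min), g_max)

# Each potentiation lands near g_max but scattered by device noise,
# so repeated learning epochs explore a range of synaptic values.
random.seed(42)
samples = [stdp_update(dt=1.0) for _ in range(5)]
```

A larger `sigma` (cheaper, lower-power programming) widens this exploration, which is the mechanism the abstract identifies as beneficial.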
Resistive Memory (RRAM)-based Ternary Content Addressable Memories (TCAMs) were developed to reduce cell area, search energy, and standby power consumption beyond what can be achieved by SRAM-based TCAMs. In previous works, RRAM-based TCAMs have already been fabricated, but the impact of RRAM reliability on TCAM performance had never been assessed until now. In this work, we fabricated and extensively tested an RRAM-based TCAM circuit. We show that a trade-off exists between search latency and reliability in terms of match/mismatch detection and search/read endurance, and that an RRAM-based TCAM is an ideal building block in multi-core neuromorphic architectures. Such architectures would not be affected by the long latency and limited write endurance, and could greatly benefit from the high density and zero standby power consumption of these TCAMs.
Resistive Random Access Memories (RRAMs) are a promising solution to implement Ternary Content Addressable Memories (TCAMs) that are more area- and energy-efficient than Static Random Access Memory (SRAM)-based TCAMs. However, RRAM-based TCAMs are limited in the number of bits per word by the low ratio between the resistances of the high and low resistance states (HRS/LRS) and by RRAM resistance variability. Such a limitation on the word length hinders the parallel search of very large numbers of data bits in data-intensive applications. To overcome this issue, we propose for the first time a new TCAM cell composed of two transistors and two RRAMs in a 1T2R1T configuration, where an RRAM voltage divider (2R) biases a transistor gate (1T) and an additional transistor is used to program the RRAMs (1T). A 3×128-bit 1T2R1T TCAM macro was designed, integrated, and extensively characterized. We experimentally demonstrate that the sensing margin of the proposed structure is insensitive to the HRS/LRS resistance ratio and variability. With respect to the most common type of 2T2R RRAM-based TCAM [1-3], the proposed circuit improves the sensing margin by >5000x while reaching search times of 0.93 ns. This allows large volumes of data to be searched in parallel. In addition, the proposed structure improves programming and search endurance by 100x and >10x, respectively.
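The intuition behind the 2R voltage divider can be shown with a toy calculation: when the two RRAMs hold complementary states, the divider output swings far above or below the midpoint regardless of the absolute resistance values. The resistance values and voltages below are illustrative assumptions, not the measured circuit.

```python
def divider_voltage(r_top, r_bottom, v_search=1.0):
    """Output of a two-resistor divider driving the match-transistor gate:
    V_out = V_search * R_bottom / (R_top + R_bottom)."""
    return v_search * r_bottom / (r_top + r_bottom)

# Even with a modest 10x HRS/LRS window, the gate voltage swings between
# roughly 0.09 V (match: transistor stays off) and 0.91 V (mismatch:
# transistor turns on), a margin set by the complementary programming
# rather than by the absolute resistance ratio.
lrs, hrs = 1e4, 1e5
v_match = divider_voltage(hrs, lrs)
v_mismatch = divider_voltage(lrs, hrs)
```

Because the divider is ratiometric, shifting both resistances together (variability) leaves the two output levels well separated, consistent with the insensitivity claim in the abstract.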