Phase change memory can provide a remarkable artificial synapse for neuromorphic systems, as it offers excellent reliability and can be used as an analog memory. However, this approach is complicated by the fact that crystallization and amorphization differ radically: crystallization can be realized very gradually, much like synaptic potentiation, whereas amorphization tends to be abrupt, unlike synaptic depression. Addressing this non-biorealistic amorphization requires system-level solutions that carry a considerable energy cost or limit the generality of the approach. This work demonstrates experimentally that an adapted memory structure, combined with an initialization electrical pulse followed by a sequence of identical fast pulses, can overcome this challenge. A single device can then naturally implement gradual long-term potentiation and depression, much like synapses in biology. Statistical measurements establish the reproducibility of the approach, and its physical origin is discussed, along with the importance of the device architecture and of the initial electrical pulse. System-level simulation shows that this device is especially well suited to neuroscience-inspired learning. These results highlight how nanodevices can serve bioinspired applications while retaining the qualities of an industrial technology.
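As an illustration of how such a pulse scheme could be described at the behavioral level, the sketch below gives a simplified phenomenological model, not the measured device physics: the bounds G_MIN and G_MAX and the per-pulse update fraction ALPHA are assumptions. Conductance rises gradually under a train of identical potentiation pulses and, once a hypothetical initialization pulse has been applied, falls gradually under identical depression pulses; without that initialization, depression is abrupt.

```python
# Minimal behavioral sketch of gradual LTP/LTD with identical pulses.
# Not the paper's device model: G_MIN, G_MAX, and ALPHA are assumed values.
G_MIN, G_MAX = 0.1, 1.0   # conductance bounds (arbitrary units)
ALPHA = 0.15              # assumed per-pulse update fraction

def potentiate(g, n_pulses):
    """Each identical pulse closes a fraction of the gap to G_MAX (gradual LTP)."""
    for _ in range(n_pulses):
        g += ALPHA * (G_MAX - g)
    return g

def depress(g, n_pulses, initialized):
    """Gradual LTD only after an initialization pulse; otherwise an abrupt reset."""
    if not initialized:
        return G_MIN                   # abrupt amorphization: the non-biorealistic case
    for _ in range(n_pulses):
        g -= ALPHA * (g - G_MIN)
    return g

g = potentiate(G_MIN, 10)              # gradual potentiation
g = depress(g, 10, initialized=True)   # gradual depression after initialization
print(f"final conductance: {g:.3f} (a.u.)")
```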
Recurrent neural networks are currently the subject of intensive research efforts to solve temporal computing problems. Neuromorphic processors (NPs), composed of networked neuron and synapse circuit models, natively compute in time and offer an ultralow-power solution particularly suited to emerging temporal edge-computing applications (wearable medical devices, for example). The most significant roadblock to addressing useful problems with neuromorphic hardware is the difficulty of maintaining healthy network dynamics in recurrent neural networks. In animal nervous systems, this is achieved via a multitude of adaptive homeostatic mechanisms that act over multiple time scales to counteract network instability induced by drift, component failure, or learning processes such as spike-timing-dependent plasticity. One such mechanism is neuronal intrinsic plasticity (IP), whereby a neuron adapts the parameters governing its excitability so that it fires around a target rate. The approach employed in state-of-the-art NPs, in which a central volatile memory remotely sets model parameters, critically constrains parameter variety and bandwidth, rendering the realization of these essential mechanisms impossible. This paper demonstrates how reconfigurable nonvolatile resistive memories can be incorporated into neuron and synapse circuits, allowing memory to be truly colocalized with the computational units in the computing fabric and facilitating the realization of massively parallel local plasticity mechanisms in neuromorphic hardware. Exploiting nonconventional programming operations of HfO2-based RRAM (the stochastic SET and the RESET random variable), we propose a technologically plausible IP algorithm and demonstrate its use in a recurrent neural network topology, whereby the system self-organizes to sustain stable and healthy network dynamics around a target firing rate.
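To make the idea of an intrinsic-plasticity rule concrete, the following is a minimal sketch, not the circuit or algorithm reported here: a leaky neuron's firing threshold is nudged in discrete, probabilistic steps, loosely mimicking stochastic SET/RESET programming of an RRAM device, so that its firing rate drifts toward a target. TARGET_RATE, P_UPDATE, and STEP are assumed values chosen only for illustration.

```python
import random

# Illustrative intrinsic-plasticity-style rule (not the paper's IP algorithm).
TARGET_RATE = 0.2    # assumed target firing rate (spikes per time step)
P_UPDATE = 0.3       # assumed probability that a programming attempt succeeds
STEP = 0.05          # assumed discrete threshold change per successful update

threshold = 1.0      # excitability parameter adapted by the rule
v, rate = 0.0, 0.0
for _ in range(5000):
    v = 0.9 * v + random.random()            # leaky integration of random input
    spike = v >= threshold
    if spike:
        v = 0.0                              # reset membrane after a spike
    rate = 0.99 * rate + 0.01 * (1.0 if spike else 0.0)  # running rate estimate
    if random.random() < P_UPDATE:           # stochastic programming event
        if rate > TARGET_RATE:
            threshold += STEP                # too active: lower excitability
        elif rate < TARGET_RATE:
            threshold = max(STEP, threshold - STEP)  # too quiet: raise excitability

print(f"firing rate {rate:.3f} vs target {TARGET_RATE}, threshold {threshold:.2f}")
```

Because each update is small, discrete, and only probabilistically applied, the threshold hovers around the value that yields the target rate rather than settling exactly on it, which is the qualitative behavior expected from device-level stochastic programming.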