In recent years, artificial neural networks have become the flagship algorithm of artificial intelligence [1]. In these systems, neuron activation functions are static and computing is achieved through standard arithmetic operations. By contrast, a prominent branch of neuro-inspired computing embraces the dynamical nature of the brain and proposes to endow each component of a neural network with dynamical functionality, such as oscillations, and to rely on emergent physical phenomena, such as synchronization [2-7], to compute complex problems with small networks [7-11]. This approach is especially interesting for hardware implementations, as emerging nanoelectronic devices can provide highly compact and energy-efficient non-linear auto-oscillators that mimic the periodic spiking activity of biological neurons [12-16]. The dynamical couplings between oscillators can then be used to mediate the synaptic communication between neurons. However, one major challenge towards implementing these models with nano-devices is to achieve learning, which requires finely controlling and tuning their coupled oscillations [17]. The dynamical features of nano-devices can indeed be difficult to control, and are prone to noise and variability [18]. In this work, we show that the outstanding tunability of spintronic nano-oscillators, i.e. the possibility to widely and accurately control their frequency through electrical current and magnetic field, can solve this challenge. We successfully train a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning their frequencies according to an automatic real-time learning rule. We show that the high experimental recognition rates stem from the outstanding ability of these oscillators to synchronize.
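The synchronization mechanism behind this classification scheme can be illustrated with a minimal numerical sketch. The code below is not the authors' model of the device physics: it integrates the generic Adler phase equation for a single oscillator injection-locked to an external drive, with purely illustrative frequency and coupling values, to show how the locked/unlocked outcome depends on the detuning relative to the coupling strength.

```python
# Minimal sketch (illustrative, not the authors' model): Adler phase equation
# for one oscillator driven by an external microwave source. The oscillator
# locks to the drive when the detuning is smaller than the coupling strength;
# the class of an input can then be read out from which oscillators
# synchronize. All numerical values here are assumptions.
import numpy as np

def phase_drift(detuning_hz, coupling, dt=1e-3, steps=50_000):
    """Integrate d(phi)/dt = 2*pi*detuning - coupling*sin(phi).

    Returns the final phase difference between oscillator and drive:
    bounded if the oscillator locks to the drive, growing without
    bound otherwise.
    """
    phi = 0.0
    for _ in range(steps):
        phi += dt * (2 * np.pi * detuning_hz - coupling * np.sin(phi))
    return phi

print(phase_drift(1.0, 10.0))  # |2*pi*1| < 10 -> locked, phase stays bounded
print(phase_drift(2.0, 10.0))  # |2*pi*2| > 10 -> unlocked, phase runs away
```

In this picture, tuning an oscillator's natural frequency (here via current and magnetic field) shifts the detuning term, and therefore which inputs it locks to; that detuning is the knob a learning rule can adjust.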
Our results demonstrate that non-trivial pattern-classification tasks can be achieved with small hardware neural networks by endowing them with non-linear dynamical features: here, oscillations and synchronization. This demonstration of real-time learning with an array of four spin-torque nano-oscillators is a milestone for spintronics-based neuromorphic computing.

Spin-torque nano-oscillators are natural candidates for building hardware neural networks made of coupled nanoscale oscillators [8-10,13,15,18,19]. These nanoscale magnetic tunnel junctions emit microwave
In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power.
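The basis-function idea can be sketched in a few lines. The code below is a hypothetical software stand-in for the experiment, not the junction measurements: nine simulated devices with randomly varied sigmoidal responses (a rough proxy for device-to-device variability) form the basis, and a least-squares linear readout learns to combine them into a non-linear target transformation.

```python
# Illustrative sketch (assumed parameters, not measured junction data):
# a population of devices with varied response curves forms a basis set,
# and a trained linear readout approximates a non-linear transformation.
import numpy as np

rng = np.random.default_rng(0)
n_devices = 9
centers = rng.uniform(-1, 1, n_devices)  # where each response curve turns on
slopes = rng.uniform(2, 8, n_devices)    # stand-in for device variability

def population_response(x):
    """Stack each device's sigmoidal response to input x: shape (len(x), n)."""
    return 1.0 / (1.0 + np.exp(-slopes * (x[:, None] - centers)))

x = np.linspace(-1, 1, 200)
target = np.sin(np.pi * x)               # non-linear transformation to learn
# Least-squares readout: find weights so that responses @ weights ~ target
weights, *_ = np.linalg.lstsq(population_response(x), target, rcond=None)
approx = population_response(x) @ weights
```

The key point mirrored from the text is that variability is an asset here: the spread of response curves is exactly what makes the population a useful basis for the readout.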
Brains perform intelligent tasks with extremely low energy consumption, probably to a large extent because they merge computation and memory entirely, and because they function using low-precision computation. The emergence of resistive memory technologies provides an opportunity to integrate logic and memory tightly in hardware. In parallel, the recently proposed concept of the Binarized Neural Network, where multiplications are replaced by exclusive-NOR logic gates, shows a way to implement artificial intelligence using very low precision computation. In this work, we therefore propose a strategy to implement low-energy Binarized Neural Networks that combines these two ideas while retaining the energy benefits of digital electronics. We design, fabricate and test a memory array, including periphery and sensing circuits, optimized for this in-memory computing scheme. Our circuit employs hafnium oxide resistive memory integrated in the back end of line of a 130-nanometer CMOS process, in a two-transistor, two-resistor cell, which allows performing the exclusive-NOR operations of the neural network directly within the sense amplifiers. We show, based on extensive electrical measurements, that our design reduces the number of bit errors on the synaptic weights without the use of formal error-correcting codes. We design a whole system using this memory array. We show on standard machine-learning tasks (MNIST, CIFAR-10, ImageNet and an ECG task) that the system has an inherent resilience to bit errors. We show that its energy consumption is attractive compared with more standard approaches, and that it can use the memory devices in regimes where they exhibit particularly low programming energy and high endurance. We conclude the work by discussing the associations between biologically plausible ideas and more traditional digital-electronics concepts.
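The exclusive-NOR substitution can be made concrete. In a binarized network, weights and activations take values in {-1, +1}; encoding -1 as bit 0 and +1 as bit 1, a neuron's multiply-accumulate reduces to a bitwise XNOR followed by a population count. The following is a generic software sketch of this arithmetic identity, not the fabricated 2T2R circuit:

```python
def binary_dot(a_bits, w_bits, n_bits):
    """sum(a_i * w_i) for a_i, w_i in {-1, +1}, encoded one per bit.

    Each bit position where activation and weight agree contributes +1,
    each disagreement -1, so the dot product is 2*popcount(XNOR) - n_bits.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n_bits) - 1)  # 1 where bits match
    matches = bin(xnor).count("1")                   # population count
    return 2 * matches - n_bits

# Cross-check against the signed arithmetic it replaces
a, w, n = 0b1010, 0b1100, 4
acts = [1 if (a >> i) & 1 else -1 for i in range(n)]
wts = [1 if (w >> i) & 1 else -1 for i in range(n)]
assert binary_dot(a, w, n) == sum(x * y for x, y in zip(acts, wts))
```

In the circuit described above, the XNOR is evaluated in place by the sense amplifiers of the 2T2R array, so the weight never has to be moved to a separate logic unit; the sketch only shows why that operation suffices.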