Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information1. Taking inspiration from this behavior to realize high-density, low-power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimate indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals2–5 and several candidates such as memristive6 or superconducting7 oscillators, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data reliably. Here, we show experimentally that a nanoscale spintronic oscillator8,9 can achieve spoken-digit recognition with accuracies similar to those of state-of-the-art neural networks. We pinpoint the regime of magnetization dynamics that leads to the highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact with one another, their long lifetime, and their low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators.
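The back-of-the-envelope estimate above can be checked directly. The sketch below assumes, for illustration, a 1 cm × 1 cm "thumb-sized" chip area; that specific figure is an assumption, not stated in the original:

```python
import math

chip_area_m2 = 1e-2 * 1e-2   # assumption: a 1 cm x 1 cm chip
n_oscillators = 1e8          # one hundred million oscillators

# Area available per oscillator in a dense two-dimensional array
area_per_osc = chip_area_m2 / n_oscillators   # 1e-12 m^2 each

# Lateral size of one square cell
side = math.sqrt(area_per_osc)   # 1e-6 m, i.e. one micrometer
```

With these numbers, each oscillator cell is one micrometer on a side, consistent with the sub-micrometer requirement quoted in the abstract.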
Spin-torque nano-oscillators can emulate neurons at the nanoscale. Recent works show that the non-linearity of their oscillation amplitude can be leveraged to achieve waveform classification for a signal encoded in the amplitude of the input voltage. Here we show that the frequency and the phase of the oscillator can also be used to recognize waveforms. For this purpose, we phase-lock the oscillator to the input waveform, which carries information in its modulated frequency. In this way we considerably decrease the amplitude, phase and frequency noise. We show that this method allows us to classify sine and square waveforms with an accuracy above 99% when decoding the output from the oscillator amplitude, phase or frequency. We find that the recognition rates are directly related to the noise and non-linearity of each variable. These results prove that spin-torque nano-oscillators offer an interesting platform for implementing different computing schemes that leverage their rich dynamical features.
The recent demonstration of neuromorphic computing with spin-torque nano-oscillators has opened a path to energy-efficient data processing. The success of this demonstration hinged on the intrinsic short-term memory of the oscillators. In this study, we extend the memory of spin-torque nano-oscillators through time-delayed feedback. We leverage this extrinsic memory to increase the efficiency of solving pattern-recognition tasks that require memory to discriminate different inputs. The large tunability of these non-linear oscillators allows us to control and optimize the delayed-feedback memory using different operating conditions of applied current and magnetic field.
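Time-delayed feedback of this kind is commonly modeled in software as a single non-linear node whose own past output re-enters after a delay of tau steps, giving the node a fading memory of past inputs. The sketch below is a generic illustration of that mechanism, not the authors' device model; the function name and all parameter values (tau, gamma, beta) are illustrative assumptions:

```python
import numpy as np

def delayed_feedback_node(u, tau=10, gamma=0.8, beta=0.5):
    """Single non-linear node with time-delayed feedback.

    The state at step t depends on the input u[t] and on the node's
    own state tau steps earlier, which extends its memory of past
    inputs beyond the intrinsic (instantaneous) response.
    """
    x = np.zeros(len(u) + tau)           # padded with zeros for t < tau
    for t in range(len(u)):
        x[t + tau] = np.tanh(gamma * x[t] + beta * u[t])
    return x[tau:]

u = np.sin(np.linspace(0, 8 * np.pi, 200))   # example drive signal
x = delayed_feedback_node(u)
```

Varying gamma (feedback strength) and tau (delay) plays a role analogous to the applied current and magnetic field mentioned above: both tune how long past inputs influence the present state.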
Fabricating powerful neuromorphic chips the size of a thumb requires miniaturizing their basic units: synapses and neurons. The challenge for neurons is to scale them down to submicrometer diameters while maintaining the properties that allow for reliable information processing: high signal-to-noise ratio, endurance, stability, and reproducibility. In this work, we show that compact spin-torque nano-oscillators can naturally implement such neurons, and we quantify their ability to realize an actual cognitive task. In particular, we show that they can naturally implement reservoir computing with high performance, and we detail the recipes for this capability.
Deep learning is having an increasing impact on research, allowing, for example, the discovery of novel materials. Until now, however, these artificial-intelligence techniques have fallen short of discovering the full differential equation of an experimental physical system. Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and a dramatically reduced simulation time compared to the micromagnetic simulations that are usually employed to model them. For this purpose, we reframe the formalism of Neural Ordinary Differential Equations to the constraints of spintronics: few measured outputs, multiple inputs and internal parameters. We demonstrate with Neural Ordinary Differential Equations an acceleration factor of over 200 compared to micromagnetic simulations for a complex problem: the simulation of a reservoir computer made of magnetic skyrmions (20 minutes compared to three days). In a second realization, we show that we can predict the noisy response of experimental spintronic nano-oscillators to varying inputs after training Neural Ordinary Differential Equations on five milliseconds of their measured response to a different set of inputs. Neural Ordinary Differential Equations can therefore constitute a disruptive tool for developing spintronic applications as a complement to micromagnetic simulations, which are time-consuming and cannot fit experiments when noise or imperfections are present. Our approach can also be generalized to other electronic devices involving dynamics.
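At its core, a Neural Ordinary Differential Equation parameterizes the vector field dx/dt = f_θ(x, u) with a small network and integrates it with a standard solver. A minimal forward pass with a fixed-step Euler integrator might look as follows; the network sizes, random weights, and function names are illustrative placeholders, not those used in the work described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP parameterizing the vector field f_theta(x, u):
# two state variables plus one scalar input -> two derivatives.
W1 = rng.normal(scale=0.5, size=(16, 3))   # (state + input) -> hidden
W2 = rng.normal(scale=0.5, size=(2, 16))   # hidden -> dx/dt

def f_theta(x, u):
    z = np.concatenate([x, [u]])
    return W2 @ np.tanh(W1 @ z)

def integrate(x0, inputs, dt=0.01):
    """Fixed-step Euler integration of dx/dt = f_theta(x, u(t))."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for u in inputs:
        x = x + dt * f_theta(x, u)
        traj.append(x.copy())
    return np.array(traj)

# Drive the learned dynamics with a sinusoidal input sequence
traj = integrate([0.1, 0.0], np.sin(np.linspace(0, 2 * np.pi, 100)))
```

In practice the weights would be trained so that the integrated trajectory matches measured device outputs, and a higher-order adaptive solver would replace the Euler step; the structure of the forward pass stays the same.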
The reservoir computing neural network architecture is widely used to test hardware systems for neuromorphic computing. One of the preferred tasks for benchmarking such devices is automatic speech recognition. This task requires acoustic transformations from sound waveforms with varying amplitudes to frequency-domain maps that can be seen as feature-extraction techniques. Depending on the conversion method, these transformations sometimes obscure the contribution of the neuromorphic hardware to the overall speech recognition performance. Here, we quantify and separate the contributions of the acoustic transformations and of the neuromorphic hardware to the speech recognition success rate. We show that the non-linearity in the acoustic transformation plays a critical role in feature extraction. We compute the gain in word success rate provided by a reservoir computing device compared to the acoustic transformation alone, and show that it is an appropriate benchmark for comparing different hardware. Finally, we experimentally and numerically quantify the impact of the different acoustic transformations for neuromorphic hardware based on magnetic nano-oscillators. Artificial neural network algorithms outperform humans on recognition tasks such as image or speech recognition by leveraging deep networks of interconnected non-linear units called formal neurons1. The goal of neural networks is to extract features and classify input data through learned non-linear transformations. Running such algorithms on a classical computer is energetically costly; to overcome this issue, neuromorphic approaches2,3 propose to implement them physically. In particular, reservoir computing4,5 is a kind of recurrent neural network that has been widely used to test the efficiency of hardware for neuromorphic computing6–8 because it has a simplified architecture and learning procedure. The input is sent to a neural network with fixed recurrent connections called a reservoir.
The goal of the reservoir is to separate the different kinds of inputs, such that after this transformation the classification can be done by a linear operation. The responses of the reservoir neurons are combined linearly, with trained connections, to construct the output. Since the connections in the reservoir are random and fixed, the reservoir can be fabricated in hardware while only the output connections, often emulated in software, need to be trained, typically with linear regression. Speech recognition is a widely used class of benchmark tasks performed to test the efficiency of a neural network. It is especially employed in the case of reservoir computing because the recurrent connections of the reservoir create an intrinsic memory that is useful for classifying time-varying inputs. Generally, this task requires frequency decomposition9–11 prior to the neural network because the acoustic features are contained in the frequency rather than in the amplitude of the time-varying signal. These decompositions return the amplitude of the signal in different frequency channels as a function of time.
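The pipeline just described (fixed random reservoir, trained linear readout) can be sketched as a small echo-state network with a closed-form ridge-regression readout. As a toy stand-in for speech, the example classifies noisy sine versus square waveforms; all sizes, seeds, and scaling constants are illustrative assumptions, not a model of any particular hardware:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100                                    # reservoir size

# Fixed random connections (never trained)
W_in = rng.normal(scale=0.5, size=N)       # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_states(u):
    """Drive the reservoir with signal u; return all neuron states."""
    x = np.zeros(N)
    states = []
    for s in u:
        x = np.tanh(W @ x + W_in * s)
        states.append(x.copy())
    return np.array(states)

def make_wave(kind):
    """Toy inputs: noisy sine (kind 0) or square (kind 1) segment."""
    t = np.linspace(0, 4 * np.pi, 80)
    base = np.sin(t) if kind == 0 else np.sign(np.sin(t))
    return base + 0.1 * rng.normal(size=t.size)

# Features: time-averaged reservoir state for each waveform
X, y = [], []
for _ in range(40):
    for kind in (0, 1):
        X.append(reservoir_states(make_wave(kind)).mean(axis=0))
        y.append(kind)
X, y = np.array(X), np.array(y)

# Trained linear readout: ridge regression in closed form
lam = 1e-3
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
pred = (X @ W_out > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Only `W_out` is learned; `W_in` and `W` stay fixed, which is exactly what makes the scheme attractive for hardware: the random recurrent part can be a physical system, with training confined to the software readout.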