In recent years, artificial neural networks have become the flagship algorithm of artificial intelligence [1]. In these systems, neuron activation functions are static, and computing is achieved through standard arithmetic operations. By contrast, a prominent branch of neuro-inspired computing embraces the dynamical nature of the brain: it proposes to endow each component of a neural network with dynamical functionality, such as oscillation, and to rely on emergent physical phenomena, such as synchronization [2-7], to solve complex problems with small networks [7-11]. This approach is especially attractive for hardware implementations, as emerging nanoelectronic devices can provide highly compact and energy-efficient non-linear auto-oscillators that mimic the periodic spiking activity of biological neurons [12-16]. The dynamical couplings between oscillators can then mediate the synaptic communication between neurons. However, a major challenge in implementing these models with nano-devices is achieving learning, which requires finely controlling and tuning their coupled oscillations [17]. The dynamical features of nano-devices can indeed be difficult to control, and are prone to noise and variability [18]. In this work, we show that the outstanding tunability of spintronic nano-oscillators, i.e. the possibility to widely and accurately control their frequency through electrical current and magnetic field, can solve this challenge. We successfully train a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning their frequencies according to an automatic real-time learning rule. We show that the high experimental recognition rates stem from the remarkable ability of these oscillators to synchronize.
Our results demonstrate that non-trivial pattern-classification tasks can be achieved with small hardware neural networks by endowing them with non-linear dynamical features: here, oscillation and synchronization. This demonstration of real-time learning with an array of four spin-torque nano-oscillators is a milestone for spintronics-based neuromorphic computing.

Spin-torque nano-oscillators are natural candidates for building hardware neural networks made of coupled nanoscale oscillators [8-10,13,15,18,19]. These nanoscale magnetic tunnel junctions emit microwave signals.
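The synchronization-based principle described above can be sketched with a toy Kuramoto-style model: coupled phase oscillators lock together (high order parameter) only when their natural frequencies are tuned close to each other, which is the physical effect a frequency-tuning learning rule exploits. This is a minimal illustrative sketch, not the experimental system; the model, coupling strength, and frequencies are all assumptions.

```python
import numpy as np

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r ~ 1 means full synchronization
    return abs(np.exp(1j * theta).mean())

def mean_sync(omega, K=1.0, dt=1e-3, steps=20000, tail=5000, seed=0):
    # Euler-integrate N all-to-all coupled phase oscillators and
    # return the order parameter averaged over the last `tail` steps.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    r_sum = 0.0
    for step in range(steps):
        # sum over j of sin(theta_j - theta_i) for each oscillator i
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + (K / len(omega)) * coupling)
        if step >= steps - tail:
            r_sum += order_parameter(theta)
    return r_sum / tail

# Four oscillators, fixed coupling K = 1 (arbitrary units).
# Widely detuned natural frequencies: no synchronization.
r_detuned = mean_sync(np.array([10.0, 14.0, 18.0, 22.0]))
# Frequencies tuned close together (as a learning rule would do): phase locking.
r_tuned = mean_sync(np.array([15.0, 15.2, 14.8, 15.1]))
```

Classification schemes of this kind read out which subsets of oscillators synchronize for a given input; tuning the natural frequencies is what moves the network between the unsynchronized and synchronized regimes.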
We investigate the extremely low programming current and fast switching time of a perpendicular tunnel-magnetoresistance (P-TMR) element for spin-transfer torque switching, using a P-TMR cell of 50 nm diameter. An L1₀-ordered crystalline alloy with excellent thermal stability and a damping constant of about 0.03 is used as the free layer. A programming current of 49 µA and a switching time of 4 ns are demonstrated.

Introduction

Recently, magnetoresistive random access memory (MRAM) based on spin-transfer torque (STT) switching has been intensively developed as one of the most promising non-volatile random access memories, owing to its good scalability, good non-volatility, and fast switching time (1). STT switching of a TMR element with perpendicular magnetic anisotropy has attracted considerable attention in recent years (2-7), because of its small cell size of 6F² and its lower programming current compared with a TMR element with in-plane shape magnetic anisotropy (I-TMR) (2). L1₀-ordered alloys such as FePt are candidates for the P-TMR free layer because of their large anisotropy energy Ku, of the order of 10⁷ erg/cc, and high thermal stability (3). In this paper, we design and fabricate a P-TMR element using an L1₀-ordered alloy as the free layer and successfully demonstrate low-current, fast switching. We also fabricate a 1-kbit array of P-TMR elements to evaluate memory performance.
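As a rough plausibility check on the thermal-stability claim, the standard retention figure Δ = KuV / kBT can be evaluated for the quoted anisotropy of 10⁷ erg/cc and a 50 nm cell. The free-layer thickness below is a hypothetical value (not given in the text), and real devices typically have a lower effective anisotropy than the bulk Ku, so this is only an order-of-magnitude sketch.

```python
import math

# Geometry of the 50 nm cell; the 2 nm free-layer thickness is an assumption.
diameter_m = 50e-9
thickness_m = 2e-9                     # hypothetical free-layer thickness
ku_j_per_m3 = 1e6                      # 10^7 erg/cc = 10^6 J/m^3 (from the text)
k_b = 1.380649e-23                     # Boltzmann constant, J/K
temp_k = 300.0

# Thermal stability factor: anisotropy energy barrier over thermal energy
volume_m3 = math.pi * (diameter_m / 2) ** 2 * thickness_m
delta = ku_j_per_m3 * volume_m3 / (k_b * temp_k)
```

Under these assumptions Δ comes out in the hundreds, far above the Δ ≳ 60 commonly quoted for ten-year retention, which is consistent with the abstract's claim that L1₀-ordered alloys provide excellent thermal stability at small cell sizes.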
The figures of merit of reservoir computing (RC) using spintronic devices called magnetic tunnel junctions (MTJs) are evaluated. RC is a type of recurrent neural network: the input information is stored in certain parts of the reservoir, and computation is performed by optimizing a linear transform matrix at the output. Whereas all network weights must be trained in a general recurrent neural network, such optimization is not necessary in RC; the reservoir only has to possess a non-linear response with a memory effect. In this paper, macromagnetic simulations of the spin dynamics in MTJs are conducted for reservoir computing. It is found that the MTJ system possesses the memory effect and non-linearity required for RC. With RC using 5-7 MTJs, performance similar to that of an echo-state network with 20-30 nodes can be obtained, even when there are no magnetic and/or electrical interactions between the magnetizations.

I. INTRODUCTION

The magnetization direction of a ferromagnetic metallic film is determined by the magnetic anisotropy energy, which gives rise to non-volatility. This property can be used for magnetic random-access memory devices [1]. In magnetic tunnel junction (MTJ) devices consisting of ferromagnetic and dielectric thin films, the magnetization direction in the ferromagnet can be detected through the change in device resistance originating from the tunneling magnetoresistance (TMR) effect [2-5]. Moreover, the magnetization direction can be electrically controlled via spin torque [6-9]. Therefore, MTJ devices are well suited to constructing non-volatile high-density memory. In addition to this long-term memory effect, the magnetization precessional dynamics appear to possess a short-term memory effect with non-linear behavior. These additional dynamical properties may be suitable for computation using MTJ devices.

The recurrent neural network (RNN) [10, 11] is a machine-learning method.
It is a mathematical model that emulates the nervous system of the human brain. The RNN concept is depicted in Fig. 1(a). The model consists of three layers: input, middle (node), and output. In an RNN, the information of the middle layer recursively propagates within itself; the middle-layer state is determined by the present input and the past middle-layer state, i.e., the middle layer possesses a memory effect. All the weight matrices for the input (Win), middle (W), and output (Wout) layers must be precisely trained to obtain the desired output. However, when the middle layer has sufficient memory effect and non-linearity, computation can be performed by optimizing only the output matrix (Wout). This type of simplified RNN is called reservoir computing (RC) [12-14]. In RC, system training is simple.
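The RC training scheme described above (fix Win and W, train only Wout by linear regression) can be sketched with a minimal echo-state network on a short-term-memory task. All sizes, scalings, and the 3-step delay task are illustrative assumptions; the reservoir here is a generic tanh network, not an MTJ model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed (untrained) input and recurrent weights: only W_out is learned.
n_in, n_res, n_steps = 1, 30, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo-state property)

# Task: reproduce the input from 3 steps in the past (short-term memory).
u = rng.uniform(-1.0, 1.0, (n_steps, n_in))
target = np.roll(u[:, 0], 3)

# Drive the reservoir and collect its non-linear states.
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train only the linear readout by ridge regression (discard a washout period).
wash = 50
X, y = states[wash:], target[wash:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
```

Because Win and W stay fixed, training reduces to one linear solve; this is what makes RC attractive for physical reservoirs such as MTJs, where the internal dynamics cannot easily be trained.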