The development of an efficient neuromorphic computing system requires nanodevices that intrinsically emulate the biological behavior of neurons and synapses. While numerous artificial synapses have been shown to store weights in a manner analogous to biological synapses, the development of an artificial neuron is impeded by the need to include leaking, integrating, firing, and lateral inhibition features. In particular, previous proposals for artificial neurons have required external circuits to perform lateral inhibition, decreasing the efficiency of the resulting neuromorphic computing system. This work therefore proposes a leaky integrate-and-fire neuron that intrinsically provides lateral inhibition without requiring any additional circuitry. The proposed neuron is based on domain-wall magnetic tunnel junction devices, which have previously been proposed as artificial synapses and experimentally demonstrated for nonvolatile logic. Single-neuron micromagnetic simulations demonstrate the ability of this neuron to implement the required leaking, integrating, and firing. These simulations are then extended to pairs of adjacent neurons to demonstrate, for the first time, lateral inhibition between neighboring artificial neurons. Finally, this intrinsic lateral inhibition is applied to a ten-neuron crossbar structure trained to identify handwritten digits; in direct large-scale micromagnetic simulation of 100 digits, the network correctly identifies 94% of them.
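The leaky integrate-and-fire dynamics with winner-take-all lateral inhibition described in this abstract can be sketched at a purely behavioral level. The sketch below is device-independent and illustrative only; the leak rate, threshold, and reset rule are assumed parameters, not values from the paper's micromagnetic model.

```python
import numpy as np

def lif_step(v, i_in, leak=0.05, v_th=1.0):
    """One timestep of leaky integrate-and-fire with lateral inhibition.

    v: membrane potentials of the neuron population (illustrative units).
    i_in: input currents for this timestep.
    Returns the updated potentials and a boolean spike vector."""
    v = v * (1.0 - leak) + i_in        # leak, then integrate the inputs
    spikes = v >= v_th                 # fire when the threshold is crossed
    if spikes.any():
        winner = int(np.argmax(v))     # lateral inhibition: winner-take-all
        spikes = np.zeros_like(spikes)
        spikes[winner] = True
        v = np.zeros_like(v)           # all neurons reset after a spike
    return v, spikes
```

Driving three such neurons with a constant input vector, only the neuron receiving the largest input eventually fires, and the firing event suppresses (resets) its neighbors, which is the behavior the intrinsic lateral inhibition provides without external circuitry.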
In neuromorphic computing, artificial synapses provide a multi‐weight (MW) conductance state that is set based on inputs from neurons, analogous to the brain. Herein, artificial synapses based on magnetic materials that use a magnetic tunnel junction (MTJ) and a magnetic domain wall (DW) are explored. By fabricating lithographic notches in a DW track underneath a single MTJ, 3–5 stable resistance states that can be repeatably controlled electrically using spin‐orbit torque are achieved. The effect of geometry on the synapse behavior is explored, showing that a trapezoidal device has asymmetric weight updates with high controllability, while a rectangular device has higher stochasticity, but with stable resistance levels. The device data is input into neuromorphic computing simulators to show the usefulness of application‐specific synaptic functions. Implementing an artificial neural network (NN) applied to streamed Fashion‐MNIST data, the trapezoidal magnetic synapse can be used as a metaplastic function for efficient online learning. Implementing a convolutional NN for CIFAR‐100 image recognition, the rectangular magnetic synapse achieves near‐ideal inference accuracy, due to the stability of its resistance levels. This work shows MW magnetic synapses are a feasible technology for neuromorphic computing and provides design guidelines for emerging artificial synapse technologies.
Black phosphorus (BP) is a promising two-dimensional (2D) material for nanoscale transistors, due to its expected higher mobility than other 2D semiconductors. While most studies have reported ambipolar BP with a stronger p-type transport, it is important to fabricate both unipolar p- and n-type transistors for low-power digital circuits. Here, we report unipolar n-type BP transistors with low work function Sc and Er contacts, demonstrating a record high n-type current of 200 μA/μm in 6.5 nm thick BP. Intriguingly, the electrical transport of the as-fabricated, capped devices changes from ambipolar to n-type unipolar behavior after a month at room temperature. Transmission electron microscopy analysis of the contact cross-section reveals an intermixing layer consisting of partly oxidized metal at the interface. This intermixing layer results in a low n-type Schottky barrier between Sc and BP, leading to the unipolar behavior of the BP transistor. This unipolar transport with a suppressed p-type current is favorable for digital logic circuits to ensure a lower off-power consumption.
CMOS-based computing systems that employ the von Neumann architecture are relatively limited when it comes to parallel data storage and processing. In contrast, the human brain is a living computational signal processing unit that operates with extreme parallelism and energy efficiency. Although numerous neuromorphic electronic devices have emerged in the last decade, most of them are rigid or contain materials that are toxic to biological systems. In this work, we report on biocompatible bilayer graphene-based artificial synaptic transistors (BLAST) capable of mimicking synaptic behavior. The BLAST devices leverage a dry ion-selective membrane, enabling long-term potentiation, with ~50 aJ/µm2 switching energy efficiency, at least an order of magnitude lower than previous reports on two-dimensional material-based artificial synapses. The devices show unique metaplasticity, a useful feature for generalizable deep neural networks, and we demonstrate that metaplastic BLASTs outperform ideal linear synapses in classic image classification tasks. With switching energy well below the 1 fJ energy estimated per biological synapse, the proposed devices are powerful candidates for bio-interfaced online learning, bridging the gap between artificial and biological neural networks.
We investigate the valley Hall effect (VHE) in monolayer WSe2 field-effect transistors using optical Kerr rotation measurements at 20 K. While studies of the VHE have so far focused on n-doped MoS2, we observe the VHE in WSe2 in both the n- and p-doping regimes. Hole doping enables access to the large spin-splitting of the valence band of this material. The Kerr rotation measurements probe the spatial distribution of the valley carrier imbalance induced by the VHE. Under current flow, we observe distinct spin-valley polarization along the edges of the transistor channel. From analysis of the magnitude of the Kerr rotation, we infer a spin-valley density of 44 spins/μm, integrated over the edge region in the p-doped regime. Assuming a spin diffusion length less than 0.1 μm, this corresponds to a spin-valley polarization of the holes exceeding 1%.
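The final polarization estimate in this abstract follows from simple arithmetic: the measured linear spin density divided by the number of holes in the edge strip. The hole sheet density below is an assumed, representative value for a gated monolayer TMD and does not come from the abstract; with it, the numbers reproduce a polarization just above the stated 1%.

```python
# Back-of-envelope check of the >1% spin-valley polarization estimate.
spin_linear_density = 44.0      # spins per micron of channel edge (reported)
edge_width = 0.1                # assumed spin diffusion length, microns
n_holes_2d = 4.0e4              # holes per square micron (~4e12 cm^-2, assumed)

holes_per_micron_edge = n_holes_2d * edge_width   # holes in the edge strip
polarization = spin_linear_density / holes_per_micron_edge
# With these illustrative numbers, polarization = 0.011, i.e. just over 1%.
```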
Magnetic skyrmions are exciting candidates for energy-efficient computing due to their nonvolatility, detectability, and mobility. A recent proposal within the paradigm of reversible computing enables large-scale circuits composed of directly cascaded skyrmion logic gates, but it is limited by the manufacturing difficulty and energy costs associated with the use of notches for skyrmion synchronization. To overcome these challenges, we, therefore, propose a skyrmion logic synchronized via modulation of voltage-controlled magnetic anisotropy (VCMA). In addition to demonstrating the principle of VCMA synchronization through micromagnetic simulations, we also quantify the impacts of current density, skyrmion velocity, and anisotropy barrier height on skyrmion motion. Further micromagnetic results demonstrate the feasibility of cascaded logic circuits in which VCMA synchronizers enable clocking and pipelining, illustrating a feasible pathway toward energy-efficient large-scale computing systems based on magnetic skyrmions.
Inspired by the parallelism and efficiency of the brain, several candidates for artificial synapse devices have been developed for neuromorphic computing, yet a nonlinear and asymmetric synaptic response curve precludes their use for backpropagation, the foundation of modern supervised learning. Spintronic devices—which benefit from high endurance, low power consumption, low latency, and CMOS compatibility—are a promising technology for memory, and domain-wall magnetic tunnel junction (DW-MTJ) devices have been shown to implement synaptic functions such as long-term potentiation and spike-timing dependent plasticity. In this work, we propose a notched DW-MTJ synapse as a candidate for supervised learning. Using micromagnetic simulations at room temperature, we show that notched synapses ensure the non-volatility of the synaptic weight and allow for highly linear, symmetric, and reproducible weight updates using either spin transfer torque (STT) or spin–orbit torque (SOT) mechanisms of DW propagation. We use lookup tables constructed from micromagnetics simulations to model the training of neural networks built with DW-MTJ synapses on both the MNIST and Fashion-MNIST image classification tasks. Accounting for thermal noise and realistic process variations, the DW-MTJ devices achieve classification accuracy close to ideal floating-point updates using both STT and SOT devices at room temperature and at 400 K. Our work establishes the basis for a magnetic artificial synapse that can eventually lead to hardware neural networks with fully spintronic matrix operations implementing machine learning.
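The linear, symmetric weight updates that this abstract identifies as essential for backpropagation can be illustrated with a toy multi-state synapse model. The number of states and the [0, 1] conductance range below are illustrative choices, not parameters extracted from the paper's lookup tables.

```python
import numpy as np

class QuantizedSynapse:
    """Toy model of a multi-state synapse with linear, symmetric updates.

    Each potentiation (+1) or depression (-1) pulse moves the conductance
    by exactly one evenly spaced level, clipped at the device bounds."""
    def __init__(self, n_states=32):
        self.levels = np.linspace(0.0, 1.0, n_states)
        self.idx = n_states // 2       # start at mid-range conductance

    def update(self, direction):
        """Apply one potentiation (+1) or depression (-1) pulse."""
        self.idx = int(np.clip(self.idx + direction, 0, len(self.levels) - 1))
        return self.levels[self.idx]
```

Because every step has the same magnitude in both directions, the gradient applied by backpropagation maps directly onto a pulse count; nonlinear or asymmetric devices distort this mapping, which is why the notched DW-MTJ's linearity matters for training accuracy.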
There are pressing problems with traditional computing, especially for accomplishing data-intensive and real-time tasks, that motivate the development of in-memory computing devices that both store information and perform computation [1]. Magnetic tunnel junction (MTJ) memory elements can be used for computation by manipulating a domain wall (DW), a transition region between magnetic domains. Three leading device types that use MTJs and DWs for in-memory computing are majority logic [2-4], mLogic [5-9], and DW-MTJs [10-14]. However, these devices have suffered from challenges: spin transfer torque (STT) switching of a DW requires high current, and the multiple etch steps needed to create an MTJ pillar on top of a DW track have led to reduced tunnel magnetoresistance (TMR) [15,16]. These issues have limited experimental study of devices and circuits. Here, we study prototypes of three-terminal domain wall-magnetic tunnel junction (DW-MTJ) in-memory computing devices that address data processing bottlenecks and resolve these challenges by using perpendicular magnetic anisotropy (PMA), spin-orbit torque (SOT) switching, and an optimized lithography process. The devices achieve an average tunnel magnetoresistance TMR = 164% and a resistance-area product RA = 31 Ω·µm², close to the RA of the unpatterned film, with lower switching current density than spin transfer torque switching. A two-device circuit shows bit propagation between devices. Device-to-device variation in switching voltage is curtailed to 7% by controlling the initial DW position, which corresponds to 96% accuracy in a DW-MTJ full adder simulation. These results make strides toward using MTJs and DWs for in-memory and neuromorphic computing applications.

Computing today faces walls when processing data-intensive and unstructured tasks. Memory access in modern computers can dominate as much as 96% of computing time [17].
SRAM idle leakage can consume over 20% of the total power of a computation [18,19]. For internet of things
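The reported TMR and RA product above fix the two MTJ resistance states once a junction area is chosen. The junction area in the sketch below is an assumed example size, not a dimension from the paper; the calculation simply applies the standard TMR definition, TMR = (R_AP − R_P) / R_P.

```python
# Parallel/antiparallel MTJ resistances from the reported TMR and RA product.
RA = 31.0          # resistance-area product, ohm * um^2 (reported)
TMR = 1.64         # tunnel magnetoresistance, 164% (reported)
area = 0.031       # junction area in um^2 (assumed, e.g. ~200 nm x 155 nm)

R_parallel = RA / area                    # low-resistance (parallel) state
R_antiparallel = R_parallel * (1 + TMR)   # TMR = (R_AP - R_P) / R_P
# With these values, R_parallel = 1000 ohms and R_antiparallel = 2640 ohms.
```

A wide separation between the two states, together with the 164% TMR, is what allows reliable readout of the stored bit in the two-device propagation circuit.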