An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning that are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Backpropagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it well suited for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
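The "one addition and two comparisons" claim can be illustrated with a minimal behavioral sketch. This is not the paper's implementation; it assumes a boxcar gate on a membrane-state variable (two comparisons) and a single signed addition per active synapse, with illustrative parameter names and values:

```python
import numpy as np

def erbp_update(w, pre_spikes, error, v_mem, b_min=-1.0, b_max=1.0, lr=1e-3):
    """Behavioral sketch of an eRBP-style update for one postsynaptic neuron.

    w          : synaptic weight vector
    pre_spikes : binary vector, 1 where a presynaptic spike occurred
    error      : random-projection error signal at the dendritic compartment
    v_mem      : membrane-state variable used to gate plasticity
    """
    if b_min < v_mem < b_max:            # two comparisons (boxcar gate)
        w = w - lr * error * pre_spikes  # one signed addition per active synapse
    return w
```

For example, with an active gate the update touches only the synapses that spiked; outside the boxcar window the weights are left unchanged, which is what makes the rule cheap enough for event-driven hardware.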
We present an artificial neural network design using spin devices that achieves ultra-low-voltage operation, low power consumption, high speed, and high integration density. We employ spin-torque-switched nano-magnets for modeling neurons, and domain wall magnets for compact, programmable synapses. The spin-based neuron-synapse units operate locally at an ultra-low supply voltage of 30 mV, resulting in low computation power. CMOS-based inter-neuron communication is employed to realize network-level functionality. We corroborate circuit operation with physics-based models developed for the spin devices. Simulation results for character recognition as a benchmark application show 95% lower power consumption compared to a 45 nm CMOS design.
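The architecture described above can be abstracted in a few lines: domain-wall-magnet synapses act as programmable conductances and each spin-torque neuron fires when its summed input current exceeds the nano-magnet's critical switching current. The sketch below is a purely behavioral model with illustrative names and numbers, not a device-level simulation:

```python
import numpy as np

def spin_neuron_layer(inputs, conductances, i_threshold):
    """Behavioral sketch of a spin-based neuron layer.

    inputs       : binary input vector (1 = input line driven high)
    conductances : matrix of synaptic conductances (DWM synapse abstraction)
    i_threshold  : critical switching current of the neuron nano-magnet
    """
    v_supply = 0.030  # ultra-low 30 mV supply, per the abstract
    # Current-mode summation: each synapse contributes g * V_supply.
    currents = conductances @ (inputs * v_supply)
    # Nano-magnet switching acts as a hard threshold on the net current.
    return (currents > i_threshold).astype(int)
```

The appeal of the current-mode formulation is that the weighted sum happens "for free" in the analog domain, which is why such magneto-metallic devices suit non-Boolean workloads.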
Recently, several device and circuit design techniques have been explored for applying nano-magnets and spin-torque devices like spin valves and domain wall magnets in computational hardware. However, most of them have focused on digital logic, and their benefits over robust, high-performance CMOS remain debatable. Ultra-low-voltage, current-mode operation of magneto-metallic spin-torque devices can potentially be more suitable for non-Boolean logic like neuromorphic computation, which involves analog processing. Device-circuit co-design for different classes of neuromorphic architectures using spin-torque-based neuron models along with DWM or other memristive synapses shows that spin-based neuromorphic designs can achieve 15X-100X lower computation energy for applications such as image processing, data conversion, cognitive computing, associative memory, and programmable logic, compared to state-of-the-art CMOS designs.
A simulation framework that can comprehend the impact of material changes from the device level to the system level can be of great value, especially to evaluate the impact of emerging devices on various applications. To that effect, we developed a SPICE-based hybrid magnetic tunnel junction (MTJ)/CMOS simulator, which can be used to explore new opportunities in large-scale system design. In the proposed simulation framework, MTJ modeling is based on the Landau-Lifshitz-Gilbert (LLG) equation, incorporating both spin torque and external magnetic field(s). The LLG equation, along with the heat diffusion equation, thermal variations, and electron transport, is implemented using SPICE-inbuilt voltage-dependent current sources and capacitors. The proposed simulation framework is flexible because the physical device parameters, such as MgO thickness, ferromagnet material anisotropy (K_u), and device dimensions, are user-defined. Furthermore, we benchmarked this model with experiments in terms of switching current density (J_C), switching time (T_SWITCH), and tunneling magnetoresistance. Finally, we used the simulation framework to study different MTJ structures, such as in-plane and perpendicular magnetic anisotropy, and the impact of parametric process variations and temperature on the yield of spin transfer torque magnetoresistive random access memories, magnetic flip-flops, and spin-torque oscillators.
Index Terms: Compact model, hybrid design, magnetic flip-flops (MFF), magnetic tunnel junction (MTJ), simulation framework, SPICE, spin-torque oscillators (STO), spin transfer torque magnetoresistive random access memory (STT-MRAM).
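For reference, the LLG equation with a Slonczewski spin-torque term that such a solver integrates at each time step is commonly written as follows (the abstract does not spell out its exact formulation, so symbols follow the usual convention):

$$\frac{d\hat{m}}{dt} = -\gamma\,\hat{m}\times\vec{H}_{\mathrm{eff}} \;+\; \alpha\,\hat{m}\times\frac{d\hat{m}}{dt} \;+\; \frac{\gamma\hbar J}{2 e M_s t_F}\,\hat{m}\times\left(\hat{m}\times\hat{m}_p\right)$$

where $\hat{m}$ is the free-layer magnetization unit vector, $\vec{H}_{\mathrm{eff}}$ the effective field (anisotropy, demagnetization, and the external field), $\gamma$ the gyromagnetic ratio, $\alpha$ the Gilbert damping constant, $J$ the charge current density, $M_s$ the saturation magnetization, $t_F$ the free-layer thickness, and $\hat{m}_p$ the pinned-layer magnetization direction. Mapping each term to a SPICE voltage-dependent current source is what lets the framework co-simulate the MTJ with CMOS circuitry.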
Spin-Transfer Torque Magnetic RAM (STT-MRAM) is a promising candidate for a future universal memory. It combines the desirable attributes of current memory technologies such as SRAM, DRAM, and flash. It also addresses the key drawbacks of conventional MRAM technology: poor scalability and high write current. In this paper, we analyzed and modeled the failure probabilities of STT-MRAM cells due to parameter variations. Based on the model, we developed an efficient simulation tool to capture the coupled electrical/magnetic dynamics of the spintronic device, enabling effective prediction of memory yield. We also developed a statistical optimization methodology to minimize the memory failure probability. The proposed methodology can be used at an early stage of the design cycle to enhance memory yield.
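The kind of variation-aware yield analysis described above can be sketched with a simple Monte Carlo model: sample per-cell parameter spreads, flag a write failure when the delivered write current falls below that cell's critical switching current, and report the surviving fraction. The distributions and numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def estimate_write_yield(n_cells=100_000, seed=0):
    """Monte Carlo sketch of STT-MRAM write-yield estimation
    under parameter variations (illustrative distributions)."""
    rng = np.random.default_rng(seed)
    # Critical switching current spread from geometry/anisotropy variation.
    i_c = rng.normal(loc=60e-6, scale=6e-6, size=n_cells)      # amps
    # Delivered write current spread from access-transistor variation.
    i_write = rng.normal(loc=90e-6, scale=9e-6, size=n_cells)  # amps
    failures = np.count_nonzero(i_write < i_c)
    return 1.0 - failures / n_cells
```

A statistical optimization loop of the sort the abstract mentions would then adjust design knobs (e.g. transistor sizing or write pulse width) to push the failure probability below a yield target, evaluated at an early stage of the design cycle.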