Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
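As a rough sanity check on the figures quoted above, the per-core sizes follow directly from the totals (1 million neurons and 256 million synapses across 4096 cores imply 256 neurons and a 256-input synaptic crossbar per core). The short Python sketch below simply works through that arithmetic and the energy per input pixel implied by the 63 mW figure; it is an illustration, not code from the paper.

# Back-of-the-envelope arithmetic for the chip figures quoted above.
cores = 4096
neurons_per_core = 256        # 1 million neurons / 4096 cores
axons_per_core = 256          # 256 million synapses / 1 million neurons

neurons = cores * neurons_per_core                      # 1,048,576
synapses = cores * neurons_per_core * axons_per_core    # 268,435,456

power_w = 0.063                                         # 63 mW during video processing
pixel_rate = 400 * 240 * 30                             # 400x240 pixels at 30 frames/s
print(f"{neurons:,} neurons, {synapses:,} synapses")
print(f"~{power_w / pixel_rate * 1e9:.0f} nJ of chip power per input pixel")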
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
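As an illustration of the simpler end of the modeling spectrum surveyed above, the sketch below simulates one widely used bi-dimensional adaptive integrate-and-fire model, the adaptive exponential model of Brette and Gerstner, with forward Euler. It is not code from the paper; the parameter values are the standard published regular-spiking set, and the injected current is chosen arbitrarily.

import numpy as np

# Forward-Euler simulation of the adaptive exponential integrate-and-fire model,
# one member of the "bi-dimensional generalized adaptive integrate-and-fire" family.
# Parameters are the published regular-spiking set (Brette & Gerstner, 2005).
C, gL, EL = 281e-12, 30e-9, -70.6e-3      # membrane capacitance, leak conductance, leak reversal
VT, DT = -50.4e-3, 2e-3                   # spike-initiation threshold and slope factor
tau_w, a, b = 144e-3, 4e-9, 80.5e-12      # adaptation time constant and coupling strengths
V_reset, V_peak = -70.6e-3, 20e-3         # reset potential and numerical spike cutoff

dt, T, I = 0.1e-3, 0.5, 1e-9              # time step (s), duration (s), injected current (A)
V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > V_peak:                        # spike: reset the membrane, bump the adaptation variable
        V, w = V_reset, w + b
        spikes.append(step * dt)

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; first inter-spike intervals (s): {isis[:5]}")
# The intervals lengthen over the train, i.e. spike-frequency adaptation.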
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy efficiency through a new chip architecture based on spiking neurons, low-precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease of use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

Keywords: convolutional network | neuromorphic | neural network | TrueNorth

The human brain is capable of remarkable acts of perception while consuming very little energy. The dream of brain-inspired computing is to build machines that do the same, requiring high-accuracy algorithms and efficient hardware to run those algorithms. On the algorithm front, building on classic work on backpropagation (1), the neocognitron (2), and convolutional networks (3), deep learning has made great strides in achieving human-level performance on a wide range of recognition tasks (4). On the hardware front, building on foundational work on silicon neural systems (5), neuromorphic computing, using novel architectural primitives, has recently demonstrated hardware capable of running 1 million neurons and 256 million synapses for extremely low power (just 70 mW at real-time operation) (6). Bringing these approaches together holds the promise of a new generation of embedded, real-time systems, but first requires reconciling key differences between the structure and operation of contemporary algorithms and hardware. Here, we introduce and demonstrate an approach we call Eedn, energy-efficient deep neuromorphic networks, which creates convolutional networks whose connections, neurons, and weights have been adapted to run inference tasks on neuromorphic hardware.

For structure, typical convolutional networks place no constraints on filter sizes, whereas neuromorphic systems can take advantage of blockwise connectivity that limits filter sizes, thereby saving energy because weights can now be stored in local on-chip memory within dedicated neural cores. Here, we present a convolutional network structure that naturally maps to the efficient connection primitives used in contemporary neuromorphic systems. We enforce this connectivity constraint by partitioning filters into multiple groups and yet maintain network integra...
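The blockwise-connectivity constraint described above can be expressed, in software terms, as a grouped convolution in which each group of output features reads only its own slice of input features, so a group's filters can live entirely in one core's local memory. The sketch below illustrates that idea in plain NumPy; it is not the Eedn implementation, and all shapes, the group count, and the weight values are made up for illustration.

import numpy as np

# A minimal NumPy sketch of blockwise connectivity as a grouped convolution:
# each group of output features is computed only from its own slice of input
# features, so a group's filters fit in one core's local memory.
def grouped_conv2d(x, w, groups):
    """x: (C_in, H, W); w: (C_out, C_in // groups, kH, kW); 'valid' convolution."""
    C_in, H, W = x.shape
    C_out, C_in_g, kH, kW = w.shape
    assert C_in == groups * C_in_g
    out = np.zeros((C_out, H - kH + 1, W - kW + 1))
    outs_per_group = C_out // groups
    for g in range(groups):
        xi = x[g * C_in_g:(g + 1) * C_in_g]            # this group's input slice
        wg = w[g * outs_per_group:(g + 1) * outs_per_group]
        for o, filt in enumerate(wg, start=g * outs_per_group):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[o, i, j] = np.sum(xi[:, i:i + kH, j:j + kW] * filt)
    return out

x = np.random.randn(8, 16, 16)                         # 8 input feature maps
w = np.sign(np.random.randn(16, 4, 3, 3))              # 16 low-precision (+/-1) 3x3 filters, 2 groups
y = grouped_conv2d(x, w, groups=2)
print(y.shape)                                         # (16, 14, 14)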
Abstract: The grand challenge of neuromorphic computation is to develop a flexible brain-like architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of the human brain, within the constraints of existing silicon and post-silicon technologies. To this end, we fabricated a key building block of a modular neuromorphic architecture, a neurosynaptic core, with 256 digital integrate-and-fire neurons and a 1024×256-bit SRAM crossbar memory for synapses, using IBM's 45 nm SOI process. Our fully digital implementation is able to leverage favorable CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. In contrast to a conventional von Neumann architecture, our core tightly integrates computation (neurons) alongside memory (synapses), which allows us to implement efficient fan-out (communication) in a naturally parallel and event-driven manner, leading to ultra-low active power consumption of 45 pJ/spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states, and is thus amenable to a wide range of applications. As an example, we trained a restricted Boltzmann machine offline to perform a visual digit recognition task, and mapped the learned weights to our chip.
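The core's operation, as described, is naturally event-driven: a spike arriving on an axon reads that axon's row of the binary crossbar and deposits a per-axon-type, per-neuron weight into every connected neuron's membrane potential, and neurons that cross threshold fire and reset. The sketch below mimics that update in Python; it is not the chip's implementation. The 1024×256 crossbar and 256 neurons follow the abstract, while the number of axon types, weight range, leak, and threshold are placeholder values, not the chip's configuration.

import numpy as np

# A sketch of the event-driven crossbar update implied by the core description above.
AXONS, NEURONS, TYPES = 1024, 256, 4

crossbar = np.random.rand(AXONS, NEURONS) < 0.1            # 1024x256-bit connectivity (the SRAM)
axon_type = np.random.randint(TYPES, size=AXONS)           # each axon is assigned one type
weights = np.random.randint(-2, 3, size=(TYPES, NEURONS))  # each neuron's weight per axon type
leak, threshold = -1, 10                                   # placeholder leak per tick and firing threshold
V = np.zeros(NEURONS, dtype=int)                           # membrane potentials

def tick(active_axons):
    """One time step: integrate incoming spikes, apply leak, fire and reset."""
    global V
    for j in active_axons:                                 # event-driven: only spiking axons do work
        V += crossbar[j] * weights[axon_type[j]]
    V += leak
    fired = V >= threshold
    V[fired] = 0                                           # reset neurons that spiked
    return np.flatnonzero(fired)

fired = tick(np.random.choice(AXONS, size=50, replace=False))
print(f"{fired.size} neurons fired this tick")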
We present an approach to design spiking silicon neurons based on dynamical systems theory. Dynamical systems theory aids in choosing the appropriate level of abstraction, prescribing a neuron model with the desired dynamics while maintaining simplicity. Further, we provide a procedure to transform the prescribed equations into subthreshold current-mode circuits. We present a circuit design example, a positive-feedback integrate-and-fire neuron, fabricated in 0.25 μm CMOS. We analyze and characterize the circuit, and demonstrate that it can be configured to exhibit desired behaviors, including spike-frequency adaptation and two forms of bursting.

Index Terms: Neuromorphic engineering; silicon neuron; dynamical systems; bifurcation analysis; bursting

I. Silicon Neurons

Neuromorphic engineering aims to reproduce the spike-based computation of the brain by morphing its anatomy and physiology into custom silicon chips, which simulate neuronal networks in real time (i.e., emulate). The basic unit of these chips is the silicon neuron, designed using analog circuits for spike generation and digital ones for spike communication. Engineers have built many silicon neuron chips, ranging in complexity from simple current-to-spike-frequency generators to multicompartment, multichannel models, and ranging in density from a single neuromorphic neuron to arrays of ten thousand neurons [1,2,3,4,5,6,7]. Systems of silicon neurons have realized numerous computations, such as visual orientation maps, echolocation, and winner-take-all selection [1,8,9,10].

A fundamental choice in designing silicon neurons is selecting the appropriate level of abstraction, and the field is segregated into two design styles: a top-down approach and a bottom-up approach. The top-down approach aims to copy neurobiology, building every possible detail into silicon neurons; in this manner, designers aim to ensure that they include all of neurobiology's computing power. This approach comes at a high price: engineers are unable to build large arrays of such complex neurons, and even small arrays are difficult to use, suffering from large variations among neurons, pushing them off the precipice into the Valley of Death.¹ On the other hand, the bottom-up approach aims to build minimal neuron models, exploiting the inherent features of a technology. Engineers are able to build dense arrays of simple neurons, often expressing little variation compared to ...

¹ The Valley of Death is a conceptual region of neuron complexity where neurons are too complex to be well matched and too simple to auto-compensate for variation, resulting in poor system performance [11,12]. Eventually, complexity (∝ transistor count) increases to the degree that neurons can compensate for their variations, as occurs in neurobiology, rescuing system performance [13].
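To make the "appropriate level of abstraction" argument concrete, the sketch below shows how a simple two-variable spiking model can be switched between spike-frequency adaptation and bursting purely by parameter choice. It uses the well-known Izhikevich model as a stand-in, not the positive-feedback integrate-and-fire circuit presented in the paper; the two parameter sets are the standard "regular spiking" and "chattering" values.

import numpy as np

# A two-variable spiking model reconfigured by parameters alone:
# regular spiking with adaptation versus chattering (bursting).
def simulate(a, b, c, d, I=10.0, T=400.0, dt=0.25):
    v, u, spikes = -65.0, b * -65.0, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike cutoff: reset v, increment the slow variable u
            v, u = c, u + d
            spikes.append(step * dt)
    return np.array(spikes)

adapting = simulate(a=0.02, b=0.2, c=-65.0, d=8.0)    # spike-frequency adaptation
bursting = simulate(a=0.02, b=0.2, c=-50.0, d=2.0)    # bursting ("chattering")
print("adapting ISIs (ms):", np.diff(adapting)[:4])   # intervals lengthen over the train
print("bursting ISIs (ms):", np.diff(bursting)[:8])   # short intra-burst, long inter-burst gaps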