“…IV-C) also uses this OTA, and the output current multiplier is implemented only there. Secondly, another parallel circuit, a current mirror formed by M19,20 and M21,22, is used as an offset calibration circuit at the OTA output. I_offset is then the calibration current, set equal to the residual offset current caused, for example, by an input offset voltage between the OTA terminals.…”
We present an array of leaky integrate-and-fire (LIF) neuron circuits designed for the second-generation BrainScaleS mixed-signal 65-nm CMOS neuromorphic hardware. The neuronal array is embedded in the analog network core of a scaled-down prototype HICANN-DLS chip. Designed as continuous-time circuits, the neurons are highly tunable and reconfigurable elements with accelerated dynamics. Each neuron integrates input current from a multitude of incoming synapses and evokes a digital spike event output. The circuit offers a wide tuning range for synaptic and membrane time constants, as well as for refractory periods, to cover a number of computational models. We elucidate our design methodology, underlying circuit design, calibration, and measurement results from individual subcircuits across multiple dies. The circuit dynamics match the behavior of the LIF mathematical model. We further demonstrate a winner-take-all network on the prototype chip as a typical element of cortical processing.
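The abstract above states that the circuit dynamics match the LIF mathematical model. For readers unfamiliar with that model, the following is a minimal simulation sketch of the standard LIF equations (leaky integration toward a resting potential, threshold-triggered spike, reset, and refractory period). All parameter values here are illustrative assumptions, not the chip's actual operating points.

```python
def simulate_lif(i_in, dt=1e-4, tau_m=10e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, t_ref=2e-3):
    """Forward-Euler integration of the LIF model:
    dV/dt = (v_rest - V) / tau_m + i_in(t).
    i_in is a list of input drive samples (one per time step dt).
    Returns the membrane trace and the list of spike times."""
    v = v_rest
    refractory = 0.0     # remaining refractory time
    trace, spikes = [], []
    for step, i in enumerate(i_in):
        t = step * dt
        if refractory > 0:
            # During the refractory period the membrane is clamped.
            refractory -= dt
        else:
            # Leak toward v_rest plus the integrated input current.
            v += dt * ((v_rest - v) / tau_m + i)
            if v >= v_thresh:
                # Threshold crossing: emit a spike and reset.
                spikes.append(t)
                v = v_reset
                refractory = t_ref
        trace.append(v)
    return trace, spikes

# Constant suprathreshold drive produces regular spiking; zero drive
# leaves the membrane at rest with no spikes.
trace, spikes = simulate_lif([200.0] * 1000)   # 100 ms of drive
```

The wide tuning ranges mentioned in the abstract correspond to the model parameters `tau_m` (membrane time constant) and `t_ref` (refractory period); the synaptic time constant would shape `i_in` itself and is omitted here for brevity.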
I. INTRODUCTION

THE architecture of digital microprocessors is fundamentally different from that of the central nervous system. While the brain is a massively parallel structure of neurons interconnected through synapses [1], microprocessors are mostly based on a von Neumann architecture [2], [3] with logic gates as the elementary primitives. The human brain consumes only approximately 20 W [4], while its performance as a general-purpose problem solver is still unmatched by any computer algorithm. Taking inspiration from this biological feat, neuromorphic architectures not only adopt a non-von Neumann architecture by collocating memory close to the computational element, but also introduce massive parallelism, high energy efficiency, reconfigurability, fault tolerance, and integrate computational…

*Both authors contributed equally to this work.
“…The high number of transistors required for imitating both neurons and synapses, and the related power dissipation issues limit the prospects of large-scale and dense stacking [7], [11]. Existing all-CMOS-based prototypes of neuromorphic systems developed in academia (e.g., the Human Brain Flagship consortium in the European Union [10], [12]) and industry [13] have restricted capabilities.…”
Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control of biomedical prostheses. However, one of the major challenges of fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von Neumann bottleneck that arises when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions under different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal–oxide–semiconductor (CMOS) bioinspired hardware.
“…Renewed interest in neuromorphic photonics has been heralded by advances in photonic integration technology [1][2][3], roadblocks in conventional computing performance [4,5], the return of neuromorphic electronics [6][7][8][9][10], and the inundation of machine learning (ML) with neural models [11]. Neural networks have held some role in ML (e.g.…”
There has been a recently renewed interest in neuromorphic photonics, a field promising to access pivotal and unexplored regimes of machine intelligence. Progress has been made on isolated neurons and analog interconnects; nevertheless, this renewal has yet to produce a demonstration of a silicon photonic neuron capable of interacting with other like neurons. We report a modulator-class photonic neuron fabricated in a conventional silicon photonic process line. We demonstrate behaviors of transfer function configurability, fan-in, inhibition, time-resolved processing, and, crucially, autaptic cascadability, a sufficient set of behaviors for a device to act as a neuron participating in a network of like neurons. The silicon photonic modulator neuron constitutes the final piece needed to make photonic neural networks fully integrated on currently available silicon photonic platforms.

In this work, we fabricate and demonstrate a silicon photonic modulator neuron. It consists of a balanced photodetector directly connected to a microring (MRR) modulator. We demonstrate that this device possesses the necessary capabilities of a network-compatible neuron: fan-in, high-gain optical-to-optical nonlinearity, and…

arXiv:1812.11898v1 [physics.app-ph]