In this paper, we review recent trends in neuromorphic computing using emerging memory technologies. Two representative classes of learning algorithms used to implement hardware-based neural networks are described: bio-inspired learning algorithms and software-based learning algorithms, in particular back-propagation. The requirements that each algorithm places on synaptic devices are analyzed. We then review research trends in synaptic devices for implementing artificial neural networks.
Hardware-based spiking neural networks (SNNs) that mimic biological neurons have been reported. However, conventional neuron circuits in SNNs have a large area and high power consumption. In this work, a split-gate floating-body positive feedback (PF) device with charge trapping capability is proposed as a new neuron device that imitates the integrate-and-fire function. Because of the PF characteristic, the subthreshold swing (SS) of the device is less than 0.04 mV/dec. The super-steep SS of the device leads to a low energy consumption of ∼0.25 pJ/spike for a neuron circuit (PF neuron) built with the PF device, which is ∼100 times smaller than that of a conventional neuron circuit. The charge storage properties of the device mimic the integrate function of biological neurons without a large membrane capacitor, reducing the PF neuron area by about 17 times compared to that of a conventional neuron. We demonstrate through simulation the successful operation of a dense multiple-PF-neuron system with reset and lateral inhibition using a common self-controller in a neuron layer. With the multiple-PF-neuron system and the synapse array, on-line unsupervised pattern learning and recognition are successfully performed, demonstrating the feasibility of our PF device in a neural network.
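The integrate-and-fire behavior these neuron devices imitate can be summarized in a minimal software sketch. This is an illustrative model only; the parameter values (threshold, leak) are hypothetical and are not taken from the reported PF device.

```python
# Minimal integrate-and-fire neuron sketch (illustrative; parameters are
# hypothetical, not the PF device's measured characteristics).

def integrate_and_fire(input_currents, threshold=1.0, leak=0.0):
    """Accumulate input; emit a spike (1) and reset when threshold is crossed."""
    membrane = 0.0
    spikes = []
    for i in input_currents:
        membrane = membrane * (1.0 - leak) + i  # integrate (optional leak)
        if membrane >= threshold:
            spikes.append(1)
            membrane = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.4, 0.4, 0.4, 0.4]))  # -> [0, 0, 1, 0]
```

In the hardware described above, the "membrane" accumulation is realized by stored charge in the floating body rather than a large membrane capacitor, which is the source of the reported area savings.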
A positive-feedback (PF) neuron device capable of threshold tuning while simultaneously processing excitatory (G+) and inhibitory (G-) signals is experimentally demonstrated, for the first time, as a replacement for conventional neuron circuits. Thanks to the PF operation, the PF neuron device with steep switching characteristics can implement the integrate-and-fire (IF) function of neurons with low energy consumption. The structure of the PF neuron device efficiently merges a gated PNPN diode and a single MOSFET. IF operation with a steep subthreshold swing (SS < 1 mV/dec) is experimentally implemented by carriers accumulated in the n floating body of the PF neuron device. The carriers accumulated in the n floating body are discharged by an inhibitory signal applied to the merged FET. Moreover, the threshold voltage (Vth) of the proposed PF neuron is controlled by using a charge storage layer. The low-energy PF neuron circuit (~0.62 pJ/spike) consists of one PF device and only five MOSFETs for the IF and reset operations. In a high-level system simulation, a deep spiking neural network (D-SNN) based on PF neurons with four hidden layers (1024 neurons in each layer) achieves high accuracy (98.55%) on an MNIST classification task. The PF neuron device provides a viable solution for high-density, low-energy neuromorphic systems.
Hardware-based spiking neural networks (SNNs) are regarded as promising candidates for cognitive computing systems due to their low power consumption and highly parallel operation. In this paper, we train an SNN in which the firing time carries information, using temporal backpropagation. The temporally encoded SNN with 512 hidden neurons achieved an accuracy of 96.90% on the MNIST test set. Furthermore, the effect of device variation on accuracy in the temporally encoded SNN is investigated and compared with that of a rate-encoded network. In the hardware configuration of our SNN, a NOR-type analog memory with an asymmetric floating gate is used as the synaptic device. In addition, we propose a neuron circuit including a refractory-period generator for the temporally encoded SNN. The performance of the two-layer neural network composed of these synapses and the proposed neurons is evaluated through SPICE circuit simulation based on the BSIM3v3 model with 0.35 μm technology. The network with 128 hidden neurons achieved an accuracy of 94.9% on the MNIST dataset, a 0.1% reduction compared to the system simulation. Finally, the latency and power consumption of each block constituting the temporal network are analyzed and compared with those of the rate-encoded network as a function of the total number of time steps. Assuming the network has 256 total time steps, the temporal network consumes 15.12 times less power than the rate-encoded network and makes decisions 5.68 times faster.
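The efficiency argument above rests on the difference between rate coding and temporal (time-to-first-spike) coding. A minimal sketch of the two encodings, under the assumption that stronger inputs fire earlier in a temporal code (the paper's exact encoding scheme may differ):

```python
# Illustrative contrast between rate coding and time-to-first-spike coding.
# The encoding functions here are assumptions for demonstration, not the
# paper's exact scheme.

def rate_encode(value, total_steps):
    """Rate code: stronger input (0..1) -> more spikes across the window."""
    n_spikes = round(value * total_steps)
    return [1 if t < n_spikes else 0 for t in range(total_steps)]

def temporal_encode(value, total_steps):
    """Temporal code: stronger input -> a single, earlier spike."""
    if value <= 0:
        return [0] * total_steps
    fire_at = round((1.0 - value) * (total_steps - 1))
    return [1 if t == fire_at else 0 for t in range(total_steps)]

# The same intensity costs many spikes in a rate code but at most one
# spike in a temporal code, and that spike arrives early for strong
# inputs -- one intuition for the reported power and latency advantage.
print(sum(rate_encode(0.75, 8)))   # 6 spikes in an 8-step window
print(sum(temporal_encode(0.75, 8)))  # 1 spike
```

With one spike per neuron, total spike count (and thus switching energy) no longer scales with the number of time steps, and a downstream decision can be made as soon as the first spikes arrive.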