Abstract: Brain-inspired learning mechanisms, e.g., spike-timing-dependent plasticity (STDP), enable agile and fast on-the-fly adaptation in a spiking neural network. When incorporating emerging nanoscale resistive non-volatile memory (NVM) devices, with ultra-low power consumption and high-density integration capability, spiking neural network hardware would yield several orders of magnitude reduction in energy consumption at a very small form factor and could potentially herald autonomous learning machines. Ho…
“…When incorporating nanoscale resistive non-volatile memory components (high-density integration capability and extremely low energy consumption), hardware with SNNs will achieve several orders of magnitude reduction in energy consumption. A dendritic-inspired processing architecture was presented in addition to complementary metal-oxide-semiconductor (CMOS) neuron circuits (Wu and Saxena, 2018). Brain-inspired circuit design is thwarted by two limits: (1) understanding the event-driven spike processing of the human brain and (2) developing predictive models for the design and optimization of cognitive circuits.…”
Neuroscience and brain-inspired artificial intelligence are significant research areas. Many countries have launched brain-related projects in which neuroscience and brain-inspired artificial intelligence are major targets, aimed at advancing national interests and strengthening key areas such as military and homeland security in a competitive global environment. This paper introduces methods, emerging technologies, and progress in neuroscience and brain-inspired artificial intelligence, specifically including brain-inspired computing, brain association graphs, brain networks, the connectome, brain reconstruction, brain imaging technologies, brain-inspired chips and devices, brain-computer (or brain-machine) interfaces, cyborgs, neuro-robotics, and quantum robotics. Challenges in some of these topics are also presented and discussed.
“…Here, several (say M = 16) stochastic memristors were employed in parallel to obtain an approximate resolution of log₂ M = 4 bits on average. This concept was extended to include presynaptic axonal attenuation with parallel stochastic-switching RRAMs [95,96]. Recently, the concept was further expanded to combine axonal (presynaptic) as well as dendritic (postsynaptic) processing [55].…”
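The multi-level behavior described above can be sketched in a few lines: summing M independent binary (bistable) devices yields M + 1 discrete conductance levels, i.e. about log₂ M bits of resolution. This is a minimal illustrative sketch; the function name, seed, and default switching probability are assumptions, not details from the cited work.

```python
import math
import random

def compound_synapse_weight(m=16, p_switch=0.5, seed=0):
    """Effective weight level of a compound synapse built from m parallel
    bistable (binary) stochastic RRAM devices. Each device switches
    independently with probability p_switch, so the summed state takes one
    of m + 1 discrete levels, i.e. about log2(m) bits of resolution on
    average (log2(16) = 4 bits for m = 16). Illustrative sketch only."""
    rng = random.Random(seed)
    states = [1 if rng.random() < p_switch else 0 for _ in range(m)]
    return sum(states)  # integer level in [0, m]
```

With m = 16 the summed state ranges over 17 levels, matching the ~4-bit average resolution quoted in the passage.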
Section: Compound Synapse With Axonal and Dendritic Processing
“…Assuming a Gaussian distribution of the program/erase threshold voltages, the stochastic switching behavior of the bistable RRAM device is given by the cumulative probability p(V) = P(|V| > |V_th^{+/−}|) for a voltage drop V across the device. This is expressed as [95,96]…”
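The exact expression is elided in this excerpt (it is given in refs. [95,96]), but the stated assumptions imply a Gaussian-CDF form: a sketch of that generic form, with illustrative parameter names, is below. This is not the cited paper's formula, only the standard CDF implied by "Gaussian distribution of the threshold voltages".

```python
import math

def switch_probability(v, mu_th, sigma_th):
    """Cumulative switching probability p(V) = P(|V| > V_th) for a bistable
    RRAM under a voltage drop V, assuming the program/erase threshold V_th
    is Gaussian with mean mu_th and standard deviation sigma_th. Generic
    Gaussian-CDF sketch; see [95,96] for the exact expression."""
    z = (abs(v) - mu_th) / sigma_th
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At |V| equal to the mean threshold the device switches with probability 0.5, and the probability rises monotonically toward 1 as the overdrive grows, which is what makes the parallel-device averaging in the compound synapse work.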
Section: Compound Synapse With Axonal and Dendritic Processing
“…Each dot in the plots represents the probability density of a particular ∆w transition between −16 and 16. With dendritic processing, a double-exponential curve fits the simulated STDP window with <1-unit fitting error; without dendrites, the STDP window has approximately 4-unit error when fitted to the double exponential [95,96]. Moreover, the axonal and dendritic coefficients, α_i and β_j, and potentially their respective time delays, can be customized to implement a wide range of STDP learning windows.…”
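The double-exponential STDP window referenced above has a standard textbook form, sketched here for concreteness. The amplitudes and time constants are illustrative placeholders, not the fitted values from the cited work.

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=0.8, tau_plus=20.0, tau_minus=20.0):
    """Double-exponential STDP learning window: potentiation when the
    presynaptic spike precedes the postsynaptic one (dt > 0), depression
    otherwise. Amplitudes and time constants (ms) are illustrative
    placeholders, not values fitted in the cited work."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

Scaling this window by per-branch coefficients such as α_i and β_j, or shifting it by branch-specific delays, is how a family of different learning windows can be realized from the same compound synapse.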
Section: Compound Synapse With Axonal and Dendritic Processing
The ongoing revolution in Deep Learning is redefining the nature of computing, driven by the growing volume of pattern-classification and cognitive tasks. Specialized digital hardware for deep learning still predominates, owing to the flexibility of software implementations and the maturity of the algorithms. However, cognitive computing is increasingly desired at the edge, i.e., on energy-constrained hand-held devices, where digital von Neumann architectures are energy-prohibitive. Recent explorations in digital neuromorphic hardware have shown promise, but offer low neurosynaptic density, which is needed for scaling to applications such as intelligent cognitive assistants (ICAs). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine-learning capability at chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited because the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine-learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realizing large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy efficiency.
“…This is a new paradigm for implementing artificial neural networks using mechanisms that incorporate spike-timing-dependent plasticity, a learning algorithm discovered by neuroscientists [9][21]. The promise of spiking networks is that they are less computationally intensive and much more energy-efficient, as the spiking algorithms can be implemented on a neuromorphic chip such as Intel's Loihi chip [3] (which operates at low power because it runs asynchronously using spikes) and other neuromorphic chips [40][39][41][31]. Our work is based on the work of Masquelier and Thorpe [23][22], and Kheradpisheh et al. [14][13].…”
Spiking neural networks are biologically plausible counterparts of artificial neural networks: artificial neural networks are usually trained with stochastic gradient descent, whereas spiking neural networks are trained with spike-timing-dependent plasticity. Training deep convolutional neural networks is a memory- and power-intensive job, and spiking networks could potentially help reduce power usage. There is a large pool of tools from which to choose for training artificial neural networks of any size; on the other hand, the available tools for simulating spiking neural networks are geared toward computational-neuroscience applications and are not suitable for real-life applications. In this work we focus on implementing a spiking CNN using TensorFlow to examine the behaviour of the network, and we study catastrophic forgetting in the spiking CNN and the weight-initialization problem in R-STDP using the MNIST data set. We also report classification accuracies achieved using the N-MNIST and MNIST data sets.
CCS CONCEPTS • Computing methodologies → Machine learning; Machine learning approaches; Bio-inspired approaches;
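The R-STDP mentioned in the abstract gates the usual STDP update with a scalar reward signal. A minimal sketch of one such weight update is below; the function name, learning rate, and weight bounds are illustrative assumptions, not details from the cited implementation.

```python
def r_stdp_update(weight, eligibility, reward, lr=0.01, w_min=0.0, w_max=1.0):
    """One reward-modulated STDP (R-STDP) weight update: an STDP-derived
    eligibility trace is gated by a scalar reward, so rewarded activity
    patterns are reinforced and punished ones weakened. All names, the
    learning rate, and the weight bounds are illustrative assumptions."""
    w = weight + lr * reward * eligibility
    return min(w_max, max(w_min, w))  # clip to the allowed weight range
```

Because the update is proportional to the initial weight's distance from the clipping bounds, poorly chosen initial weights can saturate early and stop learning, which is one way the weight-initialization problem studied in the abstract can arise.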