Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow spiking network architectures have limited capacity for expressing complex representations, while training very deep spiking networks has not yet been successful. Diverse methods have been proposed to circumvent this issue, such as converting offline-trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, directly training deep SNNs on input spike events remains difficult due to the discontinuous and non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of LIF neurons. This method enables training of deep convolutional SNNs on input spike events using a spike-based backpropagation algorithm. Our experiments show the effectiveness of the proposed spike-based learning strategy on state-of-the-art deep networks (VGG and Residual architectures), achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets compared to other SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference in the spiking domain.

Deep learning approaches show remarkable results, which occasionally outperform human-level performance [20,13,40]. To that effect, deploying deep learning is becoming necessary not only on large-scale computers, but also on edge devices (e.g. phone, tablet, smart watch, robot, etc.). However, the ever-growing complexity of state-of-the-art deep neural networks, together with the explosion in the amount of data to be processed, places significant energy demands on current computing platforms.
For example, a deep ANN model requires an unprecedented amount of computing hardware resources, often demanding the computing power of cloud servers and a significant amount of time to train. The Spiking Neural Network (SNN) is one of the leading candidates for overcoming these constraints of neural computing and for efficiently harnessing machine learning algorithms in real-life (or mobile) applications [28,5]. The concepts of SNNs, often regarded as the third-generation neural network [27], are inspired by biologically plausible Leaky Integrate-and-Fire (LIF) spiking neuron models [6] that can efficiently process spatio-temporal information. The LIF neuron model is characterized by an internal state, called the membrane potential, which integrates inputs over time and generates an output spike whenever it exceeds the neuronal firing threshold. This mechanism enables event-based and asynchronous computations across the layers of spiking systems, which makes them naturally suitable for ultra-low-power computing. Furthermore, recent works [38,35] have shown that these properties make SNNs significantly more attractive for deeper networks in the case of h...
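The LIF dynamics and the approximate-derivative idea described above can be sketched in a few lines. This is a minimal illustration, not the papers' exact formulation: the threshold, leak factor, and boxcar-shaped surrogate derivative are illustrative assumptions.

```python
# Minimal sketch of a Leaky Integrate-and-Fire (LIF) neuron with a
# surrogate (approximate) derivative for spike-based backpropagation.
# THRESHOLD, LEAK, and the surrogate window are assumed constants.

THRESHOLD = 1.0   # neuronal firing threshold
LEAK = 0.99       # membrane leak factor per time step

def lif_forward(inputs):
    """Integrate an input sequence over time; emit 1.0 when the membrane
    potential crosses THRESHOLD, then reset to zero."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = LEAK * v + x          # leaky integration of the input
        if v >= THRESHOLD:
            spikes.append(1.0)
            v = 0.0               # hard reset after a spike
        else:
            spikes.append(0.0)
    return spikes

def surrogate_grad(v):
    """Approximate derivative of the non-differentiable spike function:
    a narrow boxcar around the threshold (straight-through style)."""
    return 1.0 if abs(v - THRESHOLD) < 0.5 else 0.0
```

During backpropagation, `surrogate_grad` stands in for the true (undefined) derivative of the spike function, which is what makes gradient-based training of the spiking layers possible at all.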
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm for carrying out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented in standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved through stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnected neurons. Additionally, we present a significance-driven long-term/short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
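The stochastic binary synapse described above can be sketched behaviorally: the synapse holds one of two conductance states, and the pre/post spike-timing correlation sets the probability of switching. The exponential switching-probability function and its constants are assumed placeholders, not a device model of the MTJ.

```python
import math
import random

P_MAX = 0.5   # peak switching probability (assumption)
TAU = 20.0    # timing-correlation decay constant in ms (assumption)

def switch_probability(delta_t):
    """Switching probability as a function of pre/post spike-timing
    difference delta_t (ms): strongest for coincident spikes, decaying
    exponentially with temporal distance."""
    return P_MAX * math.exp(-abs(delta_t) / TAU)

class StochasticBinarySynapse:
    """Two-state synapse: 0 = low conductance, 1 = high conductance."""

    def __init__(self, rng=None):
        self.state = 0
        self.rng = rng or random.Random(0)

    def update(self, delta_t):
        """Stochastically potentiate (pre before post, delta_t > 0)
        or depress (post before pre, delta_t < 0)."""
        if self.rng.random() < switch_probability(delta_t):
            self.state = 1 if delta_t > 0 else 0
```

Because each update is a single probabilistic binary switch rather than an analog weight change, the programming energy per event can stay very low, which is the source of the efficiency claim above.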
Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have focused on implementing deeper networks with multiple hidden layers to express increasingly complex functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases: first, the convolutional kernels are pre-trained layer-wise with unsupervised learning, and then the synaptic weights are fine-tuned with spike-based supervised gradient-descent backpropagation. Our experiments on digit recognition demonstrate that STDP-based pre-training followed by gradient-based optimization provides improved robustness, faster (~2.5×) training, and better generalization compared with purely gradient-based training without pre-training.
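The unsupervised phase of the two-phase recipe above relies on an STDP weight update. As a hedged sketch, the standard pair-based exponential STDP window is shown below; the magnitudes and time constant are illustrative assumptions, and the layer-wise loop is reduced to a single flat weight list.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression magnitudes (assumptions)
TAU = 20.0                      # STDP time constant in ms (assumption)

def stdp_delta_w(t_pre, t_post):
    """Pair-based STDP weight change: potentiate when the presynaptic
    spike precedes the postsynaptic spike, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

def pretrain_layer(weights, spike_pairs, lr=1.0):
    """Unsupervised pre-training pass: apply STDP updates for a list of
    (synapse_index, t_pre, t_post) spike-pair events."""
    for i, t_pre, t_post in spike_pairs:
        weights[i] += lr * stdp_delta_w(t_pre, t_post)
    return weights
```

After this unsupervised pass initializes each layer's kernels, the second phase fine-tunes all weights with spike-based supervised backpropagation, as the abstract describes.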
The cultivable plant-based food resources available in developing tropical countries are inadequate to supply protein for both humans and animals. This shortage of plant food sources is due to shrinking agricultural land, rapid urbanization, climate change, and tough competition between the food and feed industries for existing crops. The cheapest food materials, however, are those derived from plant sources that occur abundantly in nature yet remain underutilized. At this juncture, the identification, evaluation, and introduction of underexploited millet crops, including crops of tribal utility that are generally rich in protein, is one of the long-term viable solutions for a sustainable supply of food and feed materials. In view of the above, the present review endeavors to highlight the nutritional and functional potential of underexploited millet crops. Practical applications: Millets are an important food crop at the global level, with a significant economic impact on developing countries. Millets have advantageous characteristics as drought- and pest-resistant grains. They are considered high-energy, nourishing foods that help address malnutrition. Millet-based foods are considered potential prebiotics and probiotics with prospective health benefits. Grains of these millet species are widely consumed as a source of traditional medicine and as important foods for preserving health.
Trees are used by animals, humans, and machines to classify information and make decisions. The natural tree structures displayed by synapses of the brain involve potentiation and depression, are capable of branching, and are essential for survival and learning. Demonstrating such features in synthetic matter is challenging because of the need to host a complex energy landscape capable of learning, memory, and electrical interrogation. We report the experimental realization of tree-like conductance states at room temperature in strongly correlated perovskite nickelates by modulating the proton distribution with high-speed electric pulses. This demonstration represents a physical realization of ultrametric trees, a concept from number theory applied to the study of spin glasses in physics that inspired early neural network theory dating back almost forty years. We apply the tree-like memory features in spiking neural networks to demonstrate high-fidelity object recognition, which in the future can open new directions for neuromorphic computing and artificial intelligence.
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike-Timing-Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
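A probabilistic binary-kernel update in the Hebbian/anti-Hebbian spirit of HB-STDP described above can be sketched as follows. The switching probabilities, the {-1, +1} weight encoding, and the update rule itself are illustrative assumptions, not the paper's exact algorithm.

```python
import random

P_POT, P_DEP = 0.3, 0.3   # stochastic switching probabilities (assumptions)

def hb_stdp_update(kernel, pre_spikes, post_spike, rng=None):
    """Probabilistic update of a binary kernel (weights in {-1, +1}).
    pre_spikes: 0/1 presynaptic activity aligned with the kernel;
    post_spike: 0/1 postsynaptic activity. Correlated pre/post activity
    stochastically potentiates a weight (Hebbian); postsynaptic firing
    without presynaptic input stochastically depresses it (anti-Hebbian)."""
    rng = rng or random.Random(0)
    for i, pre in enumerate(pre_spikes):
        if post_spike and pre:               # Hebbian: correlated activity
            if rng.random() < P_POT:
                kernel[i] = 1
        elif post_spike and not pre:         # anti-Hebbian: uncorrelated
            if rng.random() < P_DEP:
                kernel[i] = -1
    return kernel
```

Because each weight is a single bit that flips only probabilistically, repeated presentations average out the noise while the kernel memory stays binary, which is where the >20× compression over 32-bit weights comes from.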