We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and supports different coding schemes for spike-train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been tested successfully in the presence of noise, requires smaller networks than reservoir computing, and converges faster than existing algorithms, such as SpikeProp, on similar tasks.
Information encoding in the nervous system is supported by the precise spike timings of neurons; however, how such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden-layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and generalizes well on an example data set. Our approach contributes both to a systematic understanding of how computations might take place in the nervous system and a learning rule with strong technical capability.
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can be successfully applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring the computational properties of spiking neural networks advances our understanding of how computations can be done with spike trains.
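To make the mechanism concrete, the core ReSuMe update can be sketched in discrete time: a weight grows when the target (desired) neuron spikes and shrinks when the actual output neuron spikes, each change scaled by a constant non-Hebbian term plus an exponential trace of recent presynaptic activity. This is a minimal illustrative sketch, not the authors' implementation; the parameter names (`lr`, `a`, `tau`) and their values are assumptions.

```python
import numpy as np

def resume_update(w, s_in, s_target, s_out, lr=0.01, a=0.05, tau=5.0, dt=1.0):
    """One pass of a simplified discrete-time ReSuMe weight update.

    s_in, s_target, s_out: binary spike trains of equal length (1 = spike).
    The weight change at each step is (target - actual) spiking, scaled by
    a non-Hebbian constant `a` plus an exponentially decaying trace of
    recent presynaptic spikes (time constant `tau`).
    """
    trace = 0.0
    for t in range(len(s_in)):
        trace = trace * np.exp(-dt / tau) + s_in[t]   # presynaptic eligibility trace
        w += lr * (s_target[t] - s_out[t]) * (a + trace)
    return w
```

With this rule, a missing target spike shortly after a presynaptic spike strengthens the connection, while an erroneous output spike weakens it, which is the push-pull behaviour the abstract relies on.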
Abstract: Although some studies have examined SpikeProp, a learning algorithm for spiking neural networks, little has been said about the required input bias neuron that sets the reference start time. This paper examines the importance of the reference time in neural networks based on temporal encoding. The findings refute previous assumptions about the reference start time.
In this paper, a feedforward spiking neural network is tested with spike-train patterns containing additional and missing spikes. The network is trained on noisy and distorted patterns using an extension of the ReSuMe learning rule to networks with hidden layers. The results show that multilayer ReSuMe can reliably learn to discriminate highly distorted patterns spanning over 500 ms.
Abstract. Reward-modulated learning rules for spiking neural networks have emerged that have been demonstrated to solve a wide range of reinforcement learning tasks. Despite this, few attempts have been made at teaching a spiking network to learn target spike trains. Here, we apply a reward-maximising learning rule to teach a spiking neural network to map between multiple input patterns and single-spike target trains. Furthermore, we compare the performance of two escape rate functions that drive output spiking activity: the Arrhenius & Current (A&C) model and the Exponential (EXP) model. We find that A&C consistently outperforms EXP, both in the accuracy of responses and in the time taken to converge during learning. We also show that jittering input patterns with a low noise amplitude improves learning, particularly by reducing fluctuations in the network responses.
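The escape rate functions compared above determine how a stochastic neuron's instantaneous firing intensity depends on its membrane potential. A common form of the Exponential (EXP) model can be sketched as follows; the specific parameter values (`rho0`, `delta_u`) are illustrative assumptions, not the paper's settings.

```python
import math

def exp_escape_rate(u, theta=1.0, rho0=0.01, delta_u=0.2):
    """Exponential (EXP) escape rate: the firing intensity grows
    exponentially as the membrane potential u approaches (and crosses)
    the threshold theta. rho0 sets the rate at threshold; delta_u
    controls the sharpness of the threshold."""
    return rho0 * math.exp((u - theta) / delta_u)

def spike_prob(u, dt=1.0, **kwargs):
    """Probability of emitting a spike in a time bin of width dt,
    given the instantaneous escape rate (inhomogeneous Poisson)."""
    return 1.0 - math.exp(-exp_escape_rate(u, **kwargs) * dt)
```

In a simulation loop, `spike_prob(u)` would be compared against a uniform random draw at each time step to decide whether the output neuron fires, which is what makes gradient-style, reward-modulated updates possible for otherwise non-differentiable spiking.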
In this chapter we give a brief overview of the biological and technical background of artificial neural networks as they are used in cognitive modelling and in technical applications. This is complemented by three instructive case studies that demonstrate the use of different neural networks in cognitive modelling.
Abstract: This paper investigates the modelling of the McGurk effect, an audio-visual speech perceptual illusion, with a distributed model of memory. The network is trained with congruent auditory and visual patterns and tested with incongruent sets of patterns considered to produce the McGurk effect.