For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate that networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perform experiments for the classical XOR problem, posed in a temporal setting, as well as for a number of other benchmark datasets. Comparing the (implicit) number of spiking neurons required for the encoding of the interpolated XOR problem, we demonstrate that temporal coding requires significantly fewer neurons than instantaneous rate-coding.

2000 Mathematics Subject Classification: 82C32, 68T05, 68T10, 68T30, 92B20.
1998 ACM Computing Classification System: C.1.3, F.1.1, I.2.6, I.5.1.
Keywords and Phrases: spiking neurons; temporal coding; error-backpropagation.
Note: Work carried out under theme SEN4 "Evolutionary Systems and Applied Algorithmics". This paper has been submitted for publication; a short version was presented at the European Symposium on Artificial Neural Networks 2000 (ESANN'2000) in Bruges, Belgium.
Introduction

Due to its success in artificial neural networks, the sigmoidal neuron is reputed to be a successful model of biological neuronal behavior. By modeling the rate at which a single biological neuron discharges action potentials (spikes) as a monotonically increasing function of the input, many useful applications of artificial neural networks have been built [22; 7; 37; 34], and substantial theoretical insights into the behavior of connectionist structures have been obtained [40; 27].

However, the spiking nature of biological neurons has recently led to explorations of the computational power associated with temporal information coding in single spikes [31; 21; 13; 26; 20; 17; 49]. In [32] it was proven that networks of spiking neurons can simulate arbitrary feedforward sigmoidal neural nets and can thus approximate any continuous function. In fact, it has been shown theoretically that spiking neural networks that convey information by individual spike times are computationally more powerful than neurons with sigmoidal activation functions [29].

As spikes can be described by 'event' coordinates (place, time) and the set of active (spiking) neurons is typically sparse, artificial spiking neural networks have been shown to allow for very efficient implementations of large neural networks [48; 33]. Single-spike-time computing has also been suggested as a new paradigm for VLSI neural network implementations [28], where it would offer a drastic speed-up.

Network architectures based on spiking neurons that encode information in the individual spike times have yielded, amongst others, a self-organizing map akin to Kohonen's SOM [39] and a network