2008
DOI: 10.1016/j.neucom.2007.12.027

Delay learning and polychronization for reservoir computing

Abstract: We propose a multi-timescale learning rule for spiking neuron networks, in line with the recently emerging field of reservoir computing. The reservoir is a network model of spiking neurons with random topology, driven by STDP (spike-time-dependent plasticity), a biologically observed form of temporal Hebbian unsupervised learning. The model is further driven by a supervised learning algorithm, based on a margin criterion, that adapts the synaptic delays linking the network to the readout neurons, with cla…
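To make the unsupervised half of this scheme concrete, here is a minimal pair-based STDP sketch in Python. The amplitudes, time constants, and exponential window below are common illustrative choices, not the parameters used in the paper.

```python
import math

# Illustrative STDP constants (assumed, not from the paper).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms
# strengthens the synapse slightly (~ +0.0078).
print(stdp_dw(10.0, 15.0))
```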

Cited by 94 publications (81 citation statements)
References 47 publications (55 reference statements)
Citation types: 3 supporting, 75 mentioning, 0 contrasting, 3 unclassified
Citing publications span 2009–2024
“…Another approach, by Paugam-Moisy et al [127], takes advantage of the theoretical results proving the importance of delays in computing with spiking neurons (see Section 3) to define a supervised learning rule acting on the delays of connections (instead of weights) between the reservoir and the readout neurons. The reservoir is an SNN with an STDP rule for adapting the weights to the task at hand, where it can be observed that polychronous groups (see Section 3.2) are activated more and more selectively as training goes on.…”
Section: Related Reservoir Computing Work (mentioning)
confidence: 99%
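The delay-learning rule itself is not spelled out in this snippet, so the following Python sketch is only a plausible reading of the idea: shift integer delays on reservoir-to-readout connections until the target readout fires first by a margin. The function name, step size, and margin are assumptions, not the rule from [127].

```python
DELAY_STEP = 1   # ms, assumed adaptation step
MARGIN = 2.0     # ms, assumed required lead of the target readout

def adapt_delays(delays, fire_times, target):
    """Hedged sketch of supervised delay learning on readout connections.

    delays: dict readout -> connection delay (ms)
    fire_times: dict readout -> first spike time (ms) on this trial
    If the target readout does not fire at least MARGIN ms before its
    best rival, speed it up and slow the rivals down.
    """
    best_rival = min(t for n, t in fire_times.items() if n != target)
    if fire_times[target] > best_rival - MARGIN:
        delays[target] = max(0, delays[target] - DELAY_STEP)
        for n in delays:
            if n != target:
                delays[n] += DELAY_STEP
    return delays

# Example: readout "A" should win but fires after "B", so its delay
# shrinks while B's grows -> {'A': 4, 'B': 6}.
print(adapt_delays({"A": 5, "B": 5}, {"A": 12.0, "B": 11.0}, target="A"))
```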
“…Learning is implemented in a Hebbian paradigm, taking into account both the spike rates and the spike timings of pre-synaptic and post-synaptic neurons within a learning window [9]. In a learning trial of 500 milliseconds (ms) of simulated time, the time window is divided into 100 ms wide (T=100) overlapping bins at 50 ms intervals (Fig.…”
Section: Learning Implementation (mentioning)
confidence: 99%
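For concreteness, 100 ms bins advanced in 50 ms steps over a 500 ms trial yield nine overlapping bins. A small Python helper (name assumed) enumerates them:

```python
def overlapping_bins(trial_ms=500, width=100, step=50):
    """Return (start, end) pairs of overlapping learning bins in ms."""
    return [(s, s + width) for s in range(0, trial_ms - width + step, step)]

print(overlapping_bins())
# [(0, 100), (50, 150), (100, 200), ..., (400, 500)]  -> 9 bins
```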
“…The weight adjustments ∆W are calculated as a function of the time difference ∆t = t_j^(f) − t_i^(f), where t_j^(f) and t_i^(f) are the last firing times of post-synaptic neuron j and pre-synaptic neuron i, respectively, within the learning time bin (Fig. 2) [9]. To keep synaptic strengths from growing without bound, we keep the values within the range 0 to 3.…”
Section: Learning Implementation (mentioning)
confidence: 99%
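A hedged sketch of the update described above: ∆W is computed from ∆t = t_j^(f) − t_i^(f), the difference between the last post- and pre-synaptic firing times in the bin, and the weight is then clipped to [0, 3] as stated. The exponential window, learning rate, and time constant are assumptions; the cited work [9] defines its own function.

```python
import math

W_MIN, W_MAX = 0.0, 3.0  # weight bounds stated in the text
TAU = 20.0               # assumed time constant (ms)
ETA = 0.1                # assumed learning rate

def update_weight(w, t_post_last, t_pre_last):
    """One Hebbian update within a learning bin, clipped to [W_MIN, W_MAX]."""
    dt = t_post_last - t_pre_last  # delta_t = t_j^(f) - t_i^(f)
    # Assumed window: magnitude decays with |dt|, sign follows dt.
    dw = ETA * math.copysign(math.exp(-abs(dt) / TAU), dt)
    return min(W_MAX, max(W_MIN, w + dw))

# Example: post fires 5 ms after pre -> potentiation, w: 1.0 -> ~1.078.
print(update_weight(1.0, t_post_last=120.0, t_pre_last=115.0))
```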
“…Networks of neurons can also process, represent, or encode stimuli through chains of neurons firing synchronously, which brings in the notion of neuronal assemblies [47], [80], [81], [44].…”
Section: C) By Correlations and Synchrony (unclassified)