2020
DOI: 10.1371/journal.pcbi.1007606

Learning spatiotemporal signals using a recurrent spiking network that discretizes time

Abstract: Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to enc…


Cited by 53 publications (110 citation statements)
References 62 publications
“…Models for fast learning have been proposed, and in general they rely on a suited pre-training or preparation of the recurrent network [12, 41–43]. This procedure is not task specific.…”
Section: Discussion (mentioning)
confidence: 99%
“…Interestingly, stability is encoded in time rather than space, which raises the question whether this approach could be combined with a pooling layer, reflecting temporal structure instead of spatial structure. Maes et al. (2020) trained a recurrently connected spiking network such that small groups of neurons become active in succession and thus provide the basis for a simple index code. Via a supervisor signal, output neurons are trained to become responsive to a particular group or index from the recurrent network and, thus, fire in a temporal order encoded in the feed-forward weights to the output layer.…”
Section: Discussion (mentioning)
confidence: 99%
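The statement above summarizes the mechanism being cited: a recurrent "clock" in which small neuron groups activate in succession provides an index code for discretized time, and a read-out layer is trained by a supervisor signal so that its feed-forward weights encode a target temporal order. The sketch below is only a simplified, rate-based illustration of that index-code idea, not the paper's spiking implementation; all names (clock_activity, W_out, target, eta) and the delta-like learning rule are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups = 20   # sequentially active clusters in the recurrent "clock" (assumed size)
n_out = 5       # read-out neurons (assumed size)
T = n_groups    # one discrete time step per active group

def clock_activity(t):
    """Index code: at step t only group t is active (one-hot activity vector)."""
    x = np.zeros(n_groups)
    x[t] = 1.0
    return x

# Target spatiotemporal pattern: which read-out neuron should fire at each step
target = rng.integers(0, n_out, size=T)

W_out = np.zeros((n_out, n_groups))  # feed-forward read-out weights
eta = 0.5                            # learning rate

for epoch in range(20):
    for t in range(T):
        x = clock_activity(t)
        y = W_out @ x                          # read-out drive
        y_target = np.eye(n_out)[target[t]]    # supervisor signal (one-hot)
        # Delta-like update: uses only presynaptic activity and the
        # postsynaptic error, i.e. quantities available at each synapse
        W_out += eta * np.outer(y_target - y, x)

# After training, replaying the clock reproduces the target temporal order
replay = [int(np.argmax(W_out @ clock_activity(t))) for t in range(T)]
print("target:", target.tolist())
print("replay:", replay)
```

Replaying the clock after training reproduces the target order, mirroring how a read-out layer can replay a learned sequence once the recurrent network steps through its groups autonomously.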
“…To improve stability, recent approaches used feed-forward structures (Pehlevan et al., 2018) or employed supervised learning rules (Laje and Buonomano, 2013). While feed-forward structures provide stable activity patterns, in general these play out on a very fast timescale (Zheng and Triesch, 2014) or require neural/synaptic adaptation such that activity moves between neuron groups (York and Van Rossum, 2009; Itskov et al., 2011; Murray et al., 2017; Maes et al., 2020). And since for supervised learning all states in the network need to be accessible at each computing unit, these so-called global learning rules are not compatible with most neuromorphic hardware.…”
Section: Introduction (mentioning)
confidence: 99%
“…Modelling studies so far have either focused on the study of sequential dynamics [38–41] or on motif acquisition [27–29]. This paper introduces an explicitly hierarchical model as a fundamental building block for the learning and replay of sequential dynamics of a compositional nature.…”
Section: From Serial To Hierarchical Modelling (mentioning)
confidence: 99%
“…Here, we present a model for learning temporal sequences on multiple scales implemented through a hierarchical network of bio-realistic spiking neurons and synapses. In contrast to current models, which focus on acquiring the motifs and speculate on the mechanisms to learn a syntax [27–29], our spiking network model learns motifs and syntax independently from a target sequence presented repeatedly. Furthermore, the plasticity of the synapses is entirely local, and does not rely on a global optimisation such as FORCE-training [30–32] or backpropagation through time [33].…”
Section: Introduction (mentioning)
confidence: 99%