2020
DOI: 10.1101/2020.09.08.287748
Preprint

Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons

Abstract: Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks hav…
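The two-level hierarchy the abstract describes can be pictured with a short sketch. The motif and syntax values below are illustrative assumptions, not the paper's spiking-network implementation, which learns both levels with plastic synapses:

```python
# Illustrative sketch only: the two-level structure from the abstract,
# with motifs (short element sequences) stored separately from the
# syntax (the order in which motifs are chained). The paper learns both
# levels in a spiking network; plain dictionaries stand in for that here.

motifs = {
    "A": [1, 2, 3],      # each motif: a few elements on a short time scale
    "B": [4, 5],
    "C": [6, 7, 8, 9],
}

syntax = ["A", "B", "A", "C"]  # longer functional sequence over motifs

# Because the levels are stored independently, the syntax can be
# relearned (e.g. reordered) while every motif is reused unchanged.
full_sequence = [element for motif in syntax for element in motifs[motif]]
print(full_sequence)  # [1, 2, 3, 4, 5, 1, 2, 3, 6, 7, 8, 9]
```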


Cited by 3 publications (9 citation statements) | References 69 publications
“…The plastic weights learn the correct transformation, in this case, the inverse of the cumulative distribution function F⁻¹(u). This is related to previous work on learning and generating sequences (Nicola and Clopath 2017; Maes et al. 2020; Maes et al. 2021). In this previous work, the backbone consists not of randomly switching clusters but of clusters active sequentially in a chain.…”
Section: Discussion
Mentioning confidence: 74%
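The transformation referenced here, F⁻¹(u), is the basis of inverse transform sampling: pushing a uniform variable u through the inverse cumulative distribution function yields samples distributed according to F. A minimal sketch, using an exponential target as an illustrative assumption (the cited work learns the mapping with plastic weights rather than computing it in closed form):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative target: exponential with rate lam (an assumption for
# this sketch, not a distribution taken from the cited papers).
lam = 2.0
u = rng.uniform(size=10_000)   # uniform inputs u ~ U(0, 1)

# For the exponential, F(x) = 1 - exp(-lam * x), so the inverse CDF is
# F^{-1}(u) = -ln(1 - u) / lam; applying it to u gives samples from F.
samples = -np.log(1.0 - u) / lam

# Sanity check: the empirical mean should approach the analytic 1/lam.
print(f"empirical mean {samples.mean():.3f}, expected {1 / lam:.3f}")
```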
“…The plastic weights learn the correct transformation, in this case, the inverse of the cumulative distribution function F⁻¹(u). This is related to previous work on learning and generating sequences (Nicola and Clopath 2017; Maes et al. 2020; Maes et al. 2021).…”
Section: Learning With Clustered Network
Mentioning confidence: 77%
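The "backbone" contrast drawn in these statements (clusters active sequentially in a chain, as in Maes et al. 2020, 2021) can be pictured as a feedforward chain that hands activity from one cluster to the next. The rate-based caricature below is an assumption for illustration, not the spiking model itself:

```python
import numpy as np

# Rate-based caricature of a chain backbone: cluster i excites cluster
# i + 1, and the last cluster feeds back to the first so the clock
# loops. Sizes and weights are illustrative assumptions.
n_clusters = 5
W = np.zeros((n_clusters, n_clusters))
for i in range(n_clusters):
    W[(i + 1) % n_clusters, i] = 1.0   # feedforward (wrap-around) link

activity = np.zeros(n_clusters)
activity[0] = 1.0                      # chain starts at cluster 0

for t in range(12):
    print(t, int(np.argmax(activity)))  # the currently active cluster
    activity = W @ activity             # advances one link per step
```

Plastic readout weights trained on top of such a deterministic clock only need to learn the mapping from chain position to output, which is the arrangement the citing text contrasts with randomly switching clusters.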