2023
DOI: 10.1101/2023.09.13.557585
Preprint

Probing learning through the lens of changes in circuit dynamics

Owen Marschall,
Cristina Savin

Abstract: Despite the success of dynamical systems as accounts of circuit computation and observed behavior, our understanding of how dynamical systems evolve over learning is very limited. Here we develop a computational framework for extracting core dynamical systems features of recurrent circuits across learning and analyze the properties of these meta-dynamics in model analogues of several brain-relevant tasks. Across learning algorithms and tasks we find a stereotyped path to task mastery, which involves the creati…

Cited by 1 publication (5 citation statements)
References 39 publications
“…Once the wait time task was introduced, features were trimmed from the network's dynamic repertoire, stabilizing to the final computational motifs supporting the target task. This expansion and then pruning of dynamical features over training has been reported previously in other tasks [28], and appears to be a common learning trait for attractor-based dynamics. Interestingly, while the STR layer had similar numbers of dynamical features across learning procedures, the OFC layer at the end of kindergarten CL training conserved a higher number of dynamical features than sub-optimal networks trained with classical CL (Fig.…”
Section: Kindergarten CL Produces Distinct Dynamical Task Solutions (supporting)
confidence: 67%
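The "expansion and then pruning of dynamical features" described above refers to counting attractor-like structures in a trained RNN at successive checkpoints. As a minimal sketch of how such a count could be obtained (this is an illustrative assumption, not the cited papers' actual pipeline; the network form, optimizer, and thresholds here are all hypothetical choices), one can search for approximate fixed points of a vanilla RNN h ← tanh(W h) by minimizing the "speed" q(h) = ½‖tanh(W h) − h‖² from many random initializations and deduplicating the converged solutions:

```python
# Illustrative sketch (assumed setup, not the cited papers' Methods):
# count approximate fixed points of h <- tanh(W h) at one checkpoint
# by minimizing the speed q(h) = 0.5 * ||tanh(W h) - h||^2 from many
# random starts, keeping only well-converged, mutually distinct solutions.
import numpy as np
from scipy.optimize import minimize

def count_fixed_points(W, n_starts=50, tol=1e-8, merge_dist=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]

    def speed(h):
        return 0.5 * np.sum((np.tanh(W @ h) - h) ** 2)

    found = []
    for _ in range(n_starts):
        h0 = rng.standard_normal(n)
        res = minimize(speed, h0, method="L-BFGS-B")
        if res.fun < tol:  # converged to an (approximate) fixed point
            # keep only if distinct from previously found fixed points
            if all(np.linalg.norm(res.x - f) > merge_dist for f in found):
                found.append(res.x)
    return len(found)

# Toy usage: strong positive self-coupling creates multiple attractors;
# repeating this count over training checkpoints would trace the
# expansion-then-pruning trajectory described in the quote above.
W = 2.0 * np.eye(3)
print(count_fixed_points(W))
```

Tracking this count (and the stability of each fixed point, via the Jacobian's eigenvalues) across checkpoints is one standard way to summarize how a circuit's dynamical repertoire changes over learning.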
“…4d). We calculated the average change in recurrent connection weights over learning (see Methods) and found that large changes in network structure occurred at the beginning of each phase of training, for any form of training, as networks reorganize from a random initial state to create low-dimensional manifolds supporting structured behavior [28]. Unexpectedly, we found that networks trained with kindergarten CL underwent a dramatic reorganization during the kindergarten phases of training, in particular when learning to perform inference of latent states.…”
Section: Results (mentioning)
confidence: 99%
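The quoted analysis summarizes reorganization as the average change in recurrent weights between checkpoints. A minimal sketch of that quantity, under the assumption that it is the mean absolute per-connection change between consecutive saved weight matrices (the cited paper's Methods may define it differently):

```python
# Illustrative sketch (assumed definition, not the cited paper's Methods):
# reorganization between consecutive training checkpoints, measured as the
# mean absolute change per recurrent connection. Spikes in this curve mark
# phases of large structural change, as described in the quote above.
import numpy as np

def mean_weight_change(checkpoints):
    """checkpoints: list of (n, n) recurrent weight matrices over training.
    Returns one value per consecutive pair: mean over i,j of |dW_ij|."""
    return [float(np.mean(np.abs(b - a)))
            for a, b in zip(checkpoints, checkpoints[1:])]

# Toy usage: a large early update followed by a small late update yields
# a curve that starts high and then settles.
rng = np.random.default_rng(1)
W0 = rng.standard_normal((10, 10))
ckpts = [W0,
         W0 + 0.5 * rng.standard_normal((10, 10)),   # big early change
         W0 + 0.5 * rng.standard_normal((10, 10)) * 0 + ckpts0 if False else None]
```

(Usage trimmed for clarity; see the test for a deterministic example.) The per-phase spikes reported in the quote would appear as local maxima of this curve aligned with the start of each training phase.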