2023
DOI: 10.7554/elife.82914

Task-dependent optimal representations for cerebellar learning

Marjorie Xie, Samuel P Muscinelli, Kameron Decker Harris, et al.

Abstract: The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons…
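To make the expansion-and-sparsening idea concrete, here is a minimal numerical sketch of a Marr-Albus-style granule layer: random mixing weights expand the inputs, and a global threshold sets the fraction of active cells (the coding level). The dimensions, the Gaussian input model, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_granule, n_patterns = 50, 1000, 200
coding_level = 0.1  # fraction of granule cells active (sparse regime)

# Random sensorimotor input patterns (a stand-in for mossy fiber activity).
x = rng.standard_normal((n_patterns, n_inputs))

# Random mixing weights: each granule cell sums a random combination of
# inputs, as in Marr-Albus-style expansion models.
J = rng.standard_normal((n_granule, n_inputs)) / np.sqrt(n_inputs)
currents = x @ J.T

# A global threshold chosen so that, on average, a `coding_level` fraction
# of granule cells is active; rectification yields the sparse code.
theta = np.quantile(currents, 1.0 - coding_level)
h = np.maximum(currents - theta, 0.0)

print("fraction of active granule cells:", np.mean(h > 0))
```

Raising `coding_level` toward 0.5 produces the dense regime that the abstract contrasts with the classical sparse code.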


Cited by 3 publications (1 citation statement)
References: 74 publications (218 reference statements)
“…Normative models developed to clarify the role of compression and expansion in feedforward neural networks have improved our understanding of the computations in many brain areas including the retina,15,16 primary visual cortex,17,18 olfactory bulb,19,20 and cerebellum.21–24 For instance, theories of compressed sensing and efficient coding have shown that, whereas random compression can preserve the similarity structure of sparse representations, the optimal compression strategy is to extract the principal components (PCs) when inputs are strongly correlated.25 Unfortunately, insights gained from analyzing feedforward networks cannot be directly applied to understand signal transformation in CTC loops due to their recurrent processing.…”
Section: Introduction (mentioning; confidence: 99%)
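The contrast drawn in this citation statement (random compression preserves the geometry of sparse inputs, while extracting principal components is the better compression for strongly correlated inputs) can be illustrated numerically. The sketch below is a toy demonstration under assumed dimensions and an assumed low-rank-plus-noise correlation model, not code from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 500, 50, 100  # ambient dim, compressed dim, number of samples

# Random compression of sparse vectors preserves similarity structure:
# pairwise distances before and after projection are highly correlated.
x_sparse = rng.standard_normal((m, n)) * (rng.random((m, n)) < 0.05)
R = rng.standard_normal((k, n)) / np.sqrt(k)  # random compression matrix
y = x_sparse @ R.T

d_orig = np.linalg.norm(x_sparse[0] - x_sparse[1:], axis=1)
d_comp = np.linalg.norm(y[0] - y[1:], axis=1)
print("distance correlation under random projection:",
      np.corrcoef(d_orig, d_comp)[0, 1])

# For strongly correlated inputs (low-rank shared signal plus weak noise),
# a handful of principal components captures nearly all the variance.
shared = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))
x_corr = shared + 0.1 * rng.standard_normal((m, n))
_, s, _ = np.linalg.svd(x_corr - x_corr.mean(0), full_matrices=False)
var = s**2 / np.sum(s**2)
print("variance captured by top 5 PCs:", var[:5].sum())
```

In this toy setting the random projection keeps the relative distances between sparse patterns nearly intact, while for the correlated ensemble the top five PCs account for almost all of the variance, matching the dichotomy the quoted introduction describes.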