2023
DOI: 10.1037/xge0001415

Building integrated representations through interleaved learning.

Abstract: Inferring relationships that go beyond our direct experience is essential for understanding our environment. This capacity requires either building representations that directly reflect structure across experiences as we encounter them or deriving the indirect relationships across experiences as the need arises. Building structure directly into overlapping representations allows for powerful learning and generalization in neural network models, but building these so-called distributed representations requires …
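As a toy sketch (not code from the paper) of the curriculum distinction at issue, an interleaved schedule mixes trials from different tasks into one stream, whereas a blocked schedule presents each task's trials in sequence; the function and task names below are illustrative assumptions:

```python
import random

def blocked_schedule(task_a, task_b):
    # Blocked curriculum: every trial of task A, then every trial of task B.
    return list(task_a) + list(task_b)

def interleaved_schedule(task_a, task_b, seed=0):
    # Interleaved curriculum: shuffle trials from both tasks together.
    rng = random.Random(seed)  # fixed seed for a reproducible ordering
    trials = list(task_a) + list(task_b)
    rng.shuffle(trials)
    return trials

# Hypothetical trial lists for two tasks.
task_a = [("A", i) for i in range(3)]
task_b = [("B", i) for i in range(3)]
print(blocked_schedule(task_a, task_b))
print(interleaved_schedule(task_a, task_b))
```

Both schedules present the same trials; only the ordering differs, which is the manipulation the interleaved-learning account cares about.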


Cited by 11 publications (6 citation statements)
References 76 publications
“…This raises the possibility that slowness, not only in feature dynamics but also in task rules, may aid learning. However, it is worth noting that interleaved training might promote the formation of more generalisable representations [63], suggesting that the optimal learning curriculum may differ depending on the task at hand. In sum, multiple lines of research point toward a beneficial effect of slowness on learning.…”
Section: Discussion
confidence: 99%
“…This suggests that a control mechanism that enhances one pathway over another depending on the task would be beneficial for behavior. In a recent paper, we adopted a version of C-HORSE that invoked such a control function in order to explain behavior across tasks with different demands in an associative inference paradigm ( Zhou et al, 2023 ). Medial prefrontal cortex could potentially carry out a control function of this kind ( Sherman et al, 2023 ), as it participates in category learning ( Mack et al, 2020 ) and is known to modulate CA1 representations as a function of task ( Eichenbaum, 2017 ; Guise and Shapiro, 2017 ).…”
Section: Discussion
confidence: 99%
“…We adopted a neural network model of the hippocampus developed after a lineage of models used to explain how the DG, CA3, and CA1 subfields of the hippocampus contribute to episodic memory ( Ketz et al, 2013 ; Norman and O’Reilly, 2003 ; O’Reilly and Rudy, 2001 ). This variant, C-HORSE, was developed recently to account for the role of the hippocampus in statistical learning ( Schapiro et al, 2017b ; Zhou et al, 2023 ). Simulations were performed in the Emergent simulation environment (version 7.0.1, Aisa et al, 2008 , O’Reilly et al, 2014a ).…”
Section: Methods
confidence: 99%
“…Given the proposed role of replay in reorganising memory to extend the cognitive map beyond direct experience, and evidence to suggest that TMR can bias the content of replay, we predicted that TMR might provide a tool for investigating the consequences of reorganising memory during periods of rest. Thus, we sought to test the hypotheses that TMR improves inferential choice and that this improvement is driven by the formation of new inferred links (or 'shortcuts') between cues that have not been experienced together 22,30 .…”
Section: Introduction
confidence: 99%