2024
DOI: 10.1038/s41562-023-01799-z

A generative model of memory construction and consolidation

Eleanor Spens, Neil Burgess

Abstract: Episodic memories are (re)constructed, share neural substrates with imagination, combine unique features with schema-based predictions and show schema-based distortions that increase with consolidation. Here we present a computational model in which hippocampal replay (from an autoassociative network) trains generative models (variational autoencoders) to (re)create sensory experiences from latent variable representations in entorhinal, medial prefrontal and anterolateral temporal cortices via the hippocampal …
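The abstract describes an architecture in which an autoassociative network (the hippocampal store) replays experiences that train variational autoencoders (the generative "neocortical" models). The following is a minimal illustrative sketch of such a training loop in PyTorch; the modern-Hopfield-style softmax retrieval, layer sizes, loss weighting and training schedule are assumptions for illustration, not the paper's implementation.

```python
# Sketch only: an autoassociative "hippocampal" store replays patterns that
# train a "neocortical" variational autoencoder. Sizes and the softmax-based
# retrieval rule are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HopfieldStore:
    """Autoassociative store: pattern completion via a softmax-weighted
    readout over stored patterns (modern-Hopfield-style, assumed here)."""
    def __init__(self, beta=8.0):
        self.patterns = None   # (n_stored, dim)
        self.beta = beta

    def store(self, x):
        self.patterns = x if self.patterns is None else torch.cat([self.patterns, x])

    def replay(self, n):
        # Cue with noisy copies of stored patterns, then pattern-complete.
        idx = torch.randint(len(self.patterns), (n,))
        cue = self.patterns[idx] + 0.1 * torch.randn_like(self.patterns[idx])
        attn = F.softmax(self.beta * cue @ self.patterns.T, dim=-1)
        return attn @ self.patterns          # replayed (reconstructed) events

class VAE(nn.Module):
    """Generative model trained on replayed events; its latent variables
    stand in for the compressed conceptual representation."""
    def __init__(self, dim=784, latent=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        return self.dec(z), mu, logvar

store, vae = HopfieldStore(), VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
store.store(torch.rand(64, 784))             # rapid one-shot encoding of experiences
for step in range(1000):                     # offline "consolidation" via replay
    batch = store.replay(32)
    recon, mu, logvar = vae(batch)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    loss = F.mse_loss(recon, batch) + 1e-3 * kl
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, sampling the decoder from latent space generates schema-consistent reconstructions, which is the sense in which memory construction and imagination share machinery in this kind of model.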

Cited by 8 publications (4 citation statements)
References 132 publications
“…Changing these alignment dynamics may require a fundamentally different learning approach: instead of changing many weights by a small amount for each example, networks could limit each weight update to a small subset of weights, thereby limiting interference between examples. In this context, work on combining deep networks with an explicit memory mechanism [49, 50] are promising.…”
Section: Discussion (mentioning, confidence: 99%)
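The statement above proposes limiting each weight update to a small subset of weights so that successive examples interfere less. A minimal sketch of one way to realise this idea is given below; the top-k-by-gradient-magnitude selection rule and the hyperparameters are illustrative assumptions, not a method from either of the cited works.

```python
# Illustrative sketch: restrict each update to a small subset of weights by
# masking gradients, reducing interference between examples.
import torch

def sparse_sgd_step(params, lr=1e-2, frac=0.05):
    """SGD step that updates only the fraction `frac` of weights with the
    largest gradient magnitudes in each tensor (assumed selection rule)."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            g = p.grad.abs().flatten()
            k = max(1, int(frac * g.numel()))
            thresh = g.topk(k).values[-1]           # k-th largest |grad|
            mask = (p.grad.abs() >= thresh).to(p.dtype)
            p -= lr * p.grad * mask                 # untouched weights keep their values
```

In use, one would call sparse_sgd_step(model.parameters()) after loss.backward() in place of a dense optimiser step, so each example writes to only a small portion of the network.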
“…Rather than being represented directly in the ventral stream, object shape may be computed by interactions between the ventral and dorsal visual streams [52, 54] or with an additional decoding step [53]. An interesting candidate would be the medial temporal lobe and hippocampus, which could receive ventral stream representations as an input and store unique features of an image [50], such as the global configuration.…”
Section: Discussion (mentioning, confidence: 99%)
“…In our analysis approach we treated successively presented stimuli as trajectories, whereby compared to most previous findings of entorhinal grid-like representations, participants did not experience or explicitly imagine the transitions. Grid cell firing is assumed to reflect the latent structure of an environment (Stachenfeld et al, 2017; Whittington et al, 2020; Spens & Burgess, 2024), which might support vector-based navigation and generalization across similarly structured environments. A recent memory model (Spens & Burgess, 2024) further suggests that shared category features might initially be stored in entorhinal cortex as latent variables that are used for memory retrieval in the hippocampus.…”
Section: Discussion (mentioning, confidence: 99%)
“…Grid cell firing is assumed to reflect the latent structure of an environment (Stachenfeld et al, 2017; Whittington et al, 2020; Spens & Burgess, 2024), which might support vector-based navigation and generalization across similarly structured environments. A recent memory model (Spens & Burgess, 2024) further suggests that shared category features might initially be stored in entorhinal cortex as latent variables that are used for memory retrieval in the hippocampus. In the present study, the grid-like representation was present only after, not before, participants completed the categorization training.…”
Section: Discussion (mentioning, confidence: 99%)