2020
DOI: 10.1093/cercor/bhaa269
Visual and Semantic Representations Predict Subsequent Memory in Perceptual and Conceptual Memory Tests

Abstract: It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used functional magnetic resonance imaging (fMRI) and representational similarity analysis to examine how visual and semantic representations predicted subsequent memory for single item encoding (e.g., seeing an orange). Three levels of visual representations corresponding to early, middle, and late visual processing stages were based on a deep ne…

Cited by 45 publications (45 citation statements)
References 102 publications
“…2). These results converge with Davis et al (2020)'s recent finding that RSA model fit for an early layer of a deep convolutional neural network (DNN) in early visual cortex predicted later memory for pictures. Our data point to specific lower-level properties available in the presented images that contribute to memory.…”
Section: Discussion (supporting)
confidence: 89%
“…We quantified the visual similarity using the penultimate convolutional layer of a pretrained convolutional neural network called AlexNet (Krizhevsky et al 2012). Deep neural networks such as AlexNet are becoming increasingly popular in visual neuroscience (Gauthier & Tarr, 2016; Kriegeskorte, 2015; Davis et al, 2021). AlexNet was trained on more than a million pictures of the ImageNet database (http://www.image-net.org).…”
Section: Confirming the Relationships of the Target to Visual and Conceptual Cues (mentioning)
confidence: 99%
“…AlexNet was trained on more than a million pictures of the ImageNet database (http://www.image-net.org). Visual similarity between two objects was quantified as the Spearman correlation between each cell in the penultimate (fully connected) layer of AlexNet (for more details, see Davis et al, 2021…”
Section: Confirming the Relationships of the Target to Visual and Conceptual Cues (mentioning)
confidence: 99%
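The procedure quoted above — treat each image's penultimate-layer activations as a feature vector and take the Spearman correlation between two vectors as their visual similarity — can be sketched as follows. This is a minimal NumPy illustration, not the authors' pipeline: the random vectors stand in for AlexNet fc7 activations (4096 units), which in practice would come from a pretrained network (e.g., torchvision's `alexnet`), and the variable names are hypothetical.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the value ranks.
    Assumes no ties, which holds for continuous activations."""
    ra = np.argsort(np.argsort(a)).astype(float)  # rank of each unit in a
    rb = np.argsort(np.argsort(b)).astype(float)  # rank of each unit in b
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Stand-ins for penultimate-layer activations of three images (hypothetical).
rng = np.random.default_rng(0)
feat_orange = rng.normal(size=4096)
feat_lemon = 0.7 * feat_orange + 0.3 * rng.normal(size=4096)  # visually similar
feat_truck = rng.normal(size=4096)                            # unrelated

print(spearman(feat_orange, feat_lemon))  # high positive correlation
print(spearman(feat_orange, feat_truck))  # near zero
```

A rank correlation, rather than Pearson, is the conventional choice in representational similarity analysis because it is robust to monotonic nonlinearities in activation magnitude.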
“…This converges with Martin et al's (2018) finding that FG activity reflected representations of explicitly rated visual object features. Davis et al (2020) reported that in FG the mid-layer of a visual DNN predicted memory for object names when the objects were forgotten, while semantic features of the object images predicted memory for the images when the names were forgotten. Our findings clarify that both image-based visual codes and non-image-based semantic feature codes are represented during successful encoding.…”
Section: Discussion (mentioning)
confidence: 99%