2020
DOI: 10.1101/2020.03.24.005132
Preprint
A Self-Supervised Deep Neural Network for Image Completion Resembles Early Visual Cortex fMRI Activity Patterns for Occluded Scenes

Abstract: The promise of artificial intelligence in understanding biological vision relies on the comparison of computational models with brain data that captures functional principles of visual information processing. Deep neural networks (DNN) have successfully matched the transformations in hierarchical processing occurring along the brain's feedforward visual pathway extending into ventral temporal cortex. However, we are still to learn if DNNs can successfully describe feedback processes in early visual cortex. Her…

Cited by 1 publication (1 citation statement; year published: 2022). References 62 publications.
“…The generative nature of this method makes it suitable for modeling different top-down modulations and feedback processing. To date, these models have been used to study effects of top-down feedback in ventral pathway [76, 2] and to model predictive coding [37], mental imagery [7] and continual learning [83]. More generally, these models can also be used for representation learning, where they can be trained using self-supervised methods to generate the visual input.…”
Section: Discussion
Confidence: 99%