2021
DOI: 10.1101/2021.06.28.450213
Preprint
Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Abstract: One reason the mammalian visual system is viewed as hierarchical, such that successive stages of processing contain ever higher-level information, is because of functional correspondences with deep convolutional neural networks (DCNNs). However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter test of correspondence: If a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive …

Cited by 2 publications (3 citation statements)
References 29 publications (63 reference statements)
“…This finding is consistent with other recent work showing that widely varying learning objectives and architectures—including transformer architectures from computational linguistics—are sufficient to produce state-of-the-art encoding performance in visual cortex, which suggests that these design factors are not the primary explanation for the success of DNNs in visual neuroscience (Conwell et al, 2022; Konkle & Alvarez, 2022; Zhuang et al, 2021). Our findings are also consistent with recent work that calls into question the apparent hierarchical correspondence between DNNs and visual cortex (Sexton & Love, 2021). Indeed, we found that the relationship between latent dimensionality and encoding performance generalized across layer depth, meaning that even within a single layer of a DNN hierarchy, encoding performance can vary widely as a function of latent dimensionality.…”
Section: Discussion (supporting)
confidence: 93%
“…Our findings are also consistent with recent work that calls into question the apparent hierarchical correspondence between DNNs and visual cortex (Sexton & Love, 2021). Indeed, we found that the relationship between latent dimensionality and encoding performance generalized across layer depth, meaning that even within a single layer of a DNN hierarchy, encoding performance can widely vary as a function of latent dimensionality.…”
Section: Discussion (supporting)
confidence: 92%
“…This finding is consistent with other recent work showing that widely varying learning objectives and architectures—including transformer architectures from computational linguistics—are sufficient to produce state-of-the-art encoding performance in visual cortex, which suggests that task and architecture are not the primary explanation for the success of DNNs in visual neuroscience [11, 12, 14]. Our findings are also consistent with recent work that calls into question the apparent hierarchical correspondence between DNNs and visual cortex [81, 82]. Indeed, we found that the relationship between latent dimensionality and encoding performance generalized across layer depth, meaning that even within a single layer of a DNN hierarchy, encoding performance can vary widely as a function of latent dimensionality.…”
Section: Discussion (supporting)
confidence: 93%