2020 · Preprint
DOI: 10.1101/2020.07.09.185116

Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network

Abstract: A salient characteristic of monkey inferior temporal (IT) cortex is the IT face processing network. Its hallmarks include: “face neurons” that respond more to faces than non-face objects, strong spatial clustering of those neurons in foci at each IT anatomical level (“face patches”), and the preferential interconnection of those foci. While some deep artificial neural networks (ANNs) are good predictors of IT neuronal responses, including face neurons, they do not explain those face network hallmarks. …
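The “face neuron” hallmark above is usually quantified by comparing a unit's mean response to faces against its mean response to non-face objects. As a hedged illustration (not the preprint's exact criterion), a standard contrast-ratio face-selectivity index for a model unit could be computed as follows:

```python
import numpy as np

def face_selectivity_index(face_resp, object_resp, eps=1e-8):
    """Contrast-ratio face-selectivity index for one model unit.

    face_resp:   1-D array of responses to face images
    object_resp: 1-D array of responses to non-face object images

    Returns a value in roughly [-1, 1]; values near +1 indicate a
    "face neuron" (much stronger responses to faces than to objects).
    NOTE: this (F - O) / (F + O) form is an assumed, conventional index,
    not necessarily the exact criterion used in the preprint.
    """
    f = float(np.mean(face_resp))
    o = float(np.mean(object_resp))
    return (f - o) / (abs(f) + abs(o) + eps)
```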

Cited by 37 publications (81 citation statements)
References 82 publications (152 reference statements)
“…These limitations of CNNs as models of the visual cortex are severe, and yet astonishingly, it is now well documented that when trained on a large number of categories, not only can CNNs reach human-level performance on visual classification tasks, but their single-unit responses can also predict neural, fMRI and MEG activation patterns in the visual cortex of human and non-human primates [35, 75–80]. Our choice of CNN architectures for modelling vOTC is thus conservative, and since the present models already mimic several known properties of the reading system (e.g.…”
Section: Limits of the Present Model (mentioning)
confidence: 99%
“…Our choice of CNN architectures for modelling vOTC is thus conservative, and since the present models already mimic several known properties of the reading system (e.g. invariance for case, word length effect, pure alexia, etc.), we may hope that our networks' predictions may fit the activity evoked by written words, as well as others did for faces or objects [35, 75–80].…”
Section: Limits of the Present Model (mentioning)
confidence: 99%
“…A useful first step in modeling the topography of IT cortex using deep neural networks (Lee et al, 2020) successfully accounted for a number of characteristics of face-selective neurons in non-human primates, including their topographic organization, by explicitly encouraging units within a layer of the network to be spatially nearer to units with correlated responses, and farther from units with uncorrelated or anti-correlated responses. Note, however, that this approach imposes topographic functional organization on the network rather than deriving it from more basic principles of cortical structure and function, such as constraints on connectivity.…”
Section: Introduction (mentioning)
confidence: 99%
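The mechanism described in the quoted passage, pulling units toward spatially near neighbors with correlated responses and away from uncorrelated or anti-correlated ones, amounts to a spatial correlation penalty added during training. The sketch below shows one plausible form of such a penalty; the coordinate layout, the distance-to-similarity mapping, and the function name are illustrative assumptions rather than the exact loss used by Lee et al. (2020).

```python
import torch

def spatial_correlation_loss(activations, coords, eps=1e-8):
    """Encourage units with correlated responses to sit close together
    on a simulated 2-D cortical sheet (a sketch of the idea described
    in the citing text; the loss in Lee et al., 2020 may differ).

    activations: (batch, n_units) responses of one layer
    coords:      (n_units, 2) fixed 2-D positions assigned to the units
    """
    # Pairwise response correlations across the batch
    z = (activations - activations.mean(0)) / (activations.std(0) + eps)
    corr = (z.T @ z) / activations.shape[0]        # (n_units, n_units)

    # Pairwise distances on the simulated cortical sheet
    dist = torch.cdist(coords, coords)             # (n_units, n_units)

    # Map distance to a target similarity: 1 for coincident units,
    # decaying toward 0 with distance; penalize the mismatch so that
    # correlated units end up near and anti-correlated units far apart
    target_similarity = 1.0 / (1.0 + dist)
    return ((corr - target_similarity) ** 2).mean()
```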
“…In the current work, we combined the approaches of task-optimized DCNN modeling (Yamins and DiCarlo, 2016; Lee et al., 2020) with flexible connectivity-constrained architectures (Jacobs and Jordan, 1992; Plaut and Behrmann, 2011) to develop a hierarchical model of topographic organization in IT cortex. We implemented a bias towards local connectivity through minimization of an explicit wiring cost function (Jacobs and Jordan, 1992) alongside a task performance cost function.…”
Section: Introduction (mentioning)
confidence: 99%
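The wiring cost mentioned in this passage, in the spirit of Jacobs and Jordan (1992), can be sketched as a penalty on connection weights scaled by the length of the connection, minimized alongside the task performance loss. The helper below is an assumed illustration of how such a term might be written, not the cited model's exact formulation.

```python
import torch

def wiring_cost(weight, pre_coords, post_coords):
    """Wiring cost in the spirit of Jacobs & Jordan (1992): penalize
    large weights on long connections so that training favors spatially
    local connectivity (a sketch; the cited model's details differ).

    weight:      (n_post, n_pre) connection weights between two sheets
    pre_coords:  (n_pre, 2) unit positions on the input sheet
    post_coords: (n_post, 2) unit positions on the output sheet
    """
    dist2 = torch.cdist(post_coords, pre_coords) ** 2  # squared connection lengths
    return (weight ** 2 * dist2).mean()

# Combined objective, as described in the quoted passage (lambda_wiring,
# pre_xy, and post_xy are hypothetical names for illustration):
# total_loss = task_loss + lambda_wiring * wiring_cost(layer.weight, pre_xy, post_xy)
```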