2022
DOI: 10.1017/s0140525x22002813

Deep problems with neural network models of human vision

Abstract: Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in predicting brain signa…

Cited by 77 publications (52 citation statements)
References 499 publications (622 reference statements)
“…We have argued for this approach in relation to making inferences about mechanistic similarity between DNNs and humans [29]. In fact, research relating DNNs to human vision provides a striking case of a disconnect between RSA and behavioural findings from psychology [29][30][31]. The findings here may explain contradictory RSA scores between DNNs and human visual processing as pointed out by Xu and Vaziri-Pashkam [20].…”
Section: Discussion
confidence: 60%
“…However, another approach is more tractable: conduct controlled experiments to establish whether the two systems are representing information in similar ways. We have argued for this approach in relation to making inferences about mechanistic similarity between DNNs and humans [29]. In fact, research relating DNNs to human vision provides a striking case of a disconnect between RSA and behavioural findings from psychology [29][30][31].…”
Section: Discussion
confidence: 99%
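The citation statements above contrast RSA scores with behavioural findings. For readers unfamiliar with representational similarity analysis (RSA), a minimal sketch follows; the data, layer names, and dimensions are illustrative assumptions, not taken from the cited papers.

```python
# Minimal RSA sketch: compare how two systems (e.g. a DNN layer and a set of
# brain responses) represent the same stimuli. All data here are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical activations: rows = stimuli, columns = units/voxels.
dnn_acts = rng.standard_normal((20, 128))    # e.g. one DNN layer
human_acts = rng.standard_normal((20, 64))   # e.g. fMRI responses

# Step 1: build a representational dissimilarity matrix (RDM) per system.
# pdist returns the condensed upper triangle of all pairwise distances.
dnn_rdm = pdist(dnn_acts, metric="correlation")
human_rdm = pdist(human_acts, metric="correlation")

# Step 2: the RSA score is the rank correlation between the two RDMs.
rho, _ = spearmanr(dnn_rdm, human_rdm)
print(f"RSA score (Spearman rho): {rho:.3f}")
```

A high RSA score only says the pairwise dissimilarity structures agree; as the citing authors argue, it does not by itself establish that the two systems represent information in mechanistically similar ways, which is why they advocate controlled experiments as a complement.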
“…This article joins the chorus of many other calls for better theory and metatheory (e.g., Bowers et al., 2022; Firestone, 2020; Funke et al., 2020; Jonas & Kording, 2017; Geirhos et al., 2020; Ma & Peters, 2020), but we clarify, extend, and substantiate the argument (a) by describing, and formalizing, the discursive pattern of inferences found in the cognitive computational (neuro)sciences, using a formal logical framework we dub a metatheoretical calculus, (b) by demonstrating how behavior, as evidenced in the literature in the form of natural language statements, when formalized can comprise a common logical fallacy, and (c) by analyzing the consequences of our metatheoretical calculus for how scientists working in the cognitive (neuro)sciences discuss and frame inferences in experimental and theoretical settings. We conclude by offering a synthesis on scientific reasoning and the desiderata for improving our inferential practice.…”
confidence: 72%
“…However, the similarities between artificial and biological neural networks are rather superficial (see Gurney, 2018). The fact that artificial neural networks mimic instances of complex behavior could just be a byproduct of their complexity and clever engineering techniques (Bowers et al., 2022). A common criticism is that neural network models end up as opaque as the brain itself.…”
Section: Optimality as a Seal of Approval
confidence: 99%