2022
DOI: 10.48550/arxiv.2211.04533
Preprint

Harmonizing the object recognition strategies of deep neural networks with humans

Abstract: The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. Here, we explore if these trends have also carried concomitant improvements in explaining the visual strategies humans rely on for object recognition. We do this by comparing two related but distinct properties of visual strategies in humans and DNNs: where they believe important visual features are in images and how they use those features to…

Cited by 17 publications (11 citation statements)
References 60 publications
“…Nonaka, Majima, Aoki, & Kamitani (2021) thus developed a "Brain Hierarchy Score" that measures similarities between hierarchical structures, applied it to 29 DNNs, and found a negative correlation between image classification performance and similarity to human vision. This finding provides a striking illustration of how DNNs can excel in performance while veering apart from human competence (see also Fel, Felipe, Linsley, & Serre 2022).…”
Section: Vision and DNNs
confidence: 73%
“…Several recent studies have also attempted to identify what design choices lead to improved representational alignment in models (Kumar et al., 2022; Muttenthaler et al., 2022; Fel et al., 2022), although Moschella et al. (2022) found that even with variation in design choices, many models trained on the same dataset end up learning similar 'relative representations' (embeddings projected into a relational form like a similarity matrix), or in other words, converge to the same representational space. Tucker et al. (2022) showed that representational alignment emerges not only in static settings like image classification, but also in dynamic reinforcement learning tasks involving human-robot interaction.…”
Section: Related Work
confidence: 99%
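The 'relative representations' idea quoted above can be made concrete with a short sketch: instead of comparing raw embeddings (which live in model-specific spaces of different dimensionality), each stimulus is re-expressed as its similarities to a shared set of anchor stimuli. This is a minimal illustrative sketch of that projection, not Moschella et al.'s implementation; all array names, shapes, and the random placeholder embeddings are assumptions for demonstration.

```python
# Minimal sketch of a 'relative representation': re-express each embedding
# as its cosine similarities to a shared set of anchor stimuli, so that
# embeddings from models with different dimensionality become comparable.
# Placeholder data throughout; this is illustrative, not the cited method.
import numpy as np

def relative_representation(embeddings, anchor_idx):
    """Project embeddings onto cosine similarities with anchor embeddings.

    embeddings: (n_stimuli, dim) array of one model's latent vectors.
    anchor_idx: indices of the stimuli used as shared anchors.
    Returns an (n_stimuli, n_anchors) relational representation.
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    anchors = emb[anchor_idx]          # (n_anchors, dim)
    return emb @ anchors.T             # cosine similarity to each anchor

# Two hypothetical models with different embedding sizes land in the same
# (n_stimuli, n_anchors) space once projected:
rng = np.random.default_rng(0)
model_a = rng.normal(size=(100, 512))  # placeholder embeddings, model A
model_b = rng.normal(size=(100, 128))  # placeholder embeddings, model B
anchors = rng.choice(100, size=10, replace=False)
rel_a = relative_representation(model_a, anchors)
rel_b = relative_representation(model_b, anchors)
print(rel_a.shape, rel_b.shape)        # both (100, 10)
```

Because both projections share the anchor axes, convergence to "the same representational space" can then be checked by comparing rel_a and rel_b directly.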
“…We define representational alignment as the degree to which the latent representations of a model match the latent representations of humans for the same set of stimuli, and refer to models that are representationally aligned with humans as being "human-aligned." Several recent papers have proposed ways to measure (Marjieh et al., 2022a), explain (Muttenthaler et al., 2022; Kumar et al., 2022), and even improve (Peterson et al., 2018; Fel et al., 2022) the representational alignment of models. However, many models that score low on these alignment metrics still have high performance on downstream tasks like image classification (Kumar et al., 2022; Muttenthaler et al., 2022; Fel et al., 2022).…”
Section: Introduction
confidence: 99%
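One common way to quantify the representational alignment defined in the quote above is RSA-style: build a pairwise-similarity matrix from the model's embeddings, build one from human judgments of the same stimuli, and correlate their off-diagonal entries. The sketch below assumes this correlational measure and uses random placeholder data; the cited papers use a variety of alignment metrics, so treat this only as an illustration of the general recipe.

```python
# Illustrative RSA-style alignment score: Spearman correlation between the
# pairwise-similarity structure of model embeddings and human similarity
# judgments for the same stimuli. Placeholder data; one metric of many.
import numpy as np
from scipy.stats import spearmanr

def alignment_score(model_emb, human_sim):
    """Correlate model and human pairwise similarities over the same stimuli."""
    emb = model_emb / np.linalg.norm(model_emb, axis=1, keepdims=True)
    model_sim = emb @ emb.T                    # (n, n) cosine similarities
    iu = np.triu_indices_from(model_sim, k=1)  # off-diagonal upper triangle
    return spearmanr(model_sim[iu], human_sim[iu]).correlation

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 256))      # placeholder model embeddings
human = rng.uniform(size=(50, 50))    # placeholder human similarity judgments
human = (human + human.T) / 2         # symmetrize, as similarity matrices are
print(alignment_score(emb, human))    # ~0 here, since both inputs are random
```

A model can score near zero on such a metric while still classifying images well, which is exactly the dissociation the quoted passage points out.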
“…While not yet widely used in bird song research, such methods reduce the dependence on engineering good features by using spectrograms or waveforms directly as input. Deep learning models can achieve impressive accuracy on various tasks, but this neither implies nor demands that their recognition strategies are similar to those of animals (Fel et al., 2022). All comparison methods make assumptions about what constitutes similarity in acoustic signals.…”
Section: Introduction
confidence: 99%