2022
DOI: 10.1101/2022.01.13.476195
Preprint

Contextual associations represented both in neural networks and human behavior

Abstract: Contextual associations facilitate object recognition in human vision. However, the role of context in artificial vision remains elusive, as do the characteristics that humans use to define context. We investigated whether contextually related objects (bicycle-helmet) are represented more similarly in convolutional neural networks (CNNs) used for image understanding than unrelated objects (bicycle-fork). Stimuli were objects against a white background, drawn from a diverse set of contexts (N = 73). CNN…
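The comparison described in the abstract, whether a CNN's internal representation places contextually related objects (bicycle-helmet) closer together than unrelated objects (bicycle-fork), can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes a torchvision-pretrained ResNet-50, penultimate-layer ("avgpool") features, cosine similarity as the measure, and hypothetical image file names.

```python
# Sketch: compare CNN feature similarity for a contextually related vs. an
# unrelated object pair. Assumes torchvision >= 0.13 and hypothetical images.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained ResNet-50; replace the classifier head with Identity so the
# forward pass returns the 2048-d penultimate ("avgpool") features.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def features(path: str) -> torch.Tensor:
    """Return L2-normalized penultimate-layer features for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(model(img), dim=-1)

# Hypothetical stimuli: isolated objects on a white background.
related = features("bicycle.png") @ features("helmet.png").T    # same context
unrelated = features("bicycle.png") @ features("fork.png").T    # different context

print(f"related pair similarity:   {related.item():.3f}")
print(f"unrelated pair similarity: {unrelated.item():.3f}")
```

Cosine similarity over penultimate-layer features is only one common proxy for representational similarity; the study itself aggregates over many object pairs across 73 contexts and may use different layers, networks, or metrics.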

Cited by 1 publication (6 citation statements)
References 25 publications

“…These results extend well-established demonstrations that deep neural networks represent broad characteristics of visual object recognition (e.g. Aminoff et al., 2022; Geirhos et al., 2021; Kubilius et al., 2016; Lee & Almeida, 2021; Mukherjee & Rogers, 2023; Tuli et al., 2021; Xu & Vaziri-Pashkam, 2021; Zeman et al., 2020; Zhou et al., 2022).…”
Section: Discussion (supporting)
confidence: 89%
“…Strong performance for vision dimensions is relatively unsurprising given the vast exposure to image information during pre-training. Notably, functional information about objects is also, to some extent, accessible via visual information: This is shown to some extent with simpler pre-training schemes for CNNs that can learn contextual associations between objects (Aminoff et al., 2022; …). Beyond visual information, the linguistic contributions of CLIP likely contribute to the improved approximation of functional information relative to vision-only models (see fig.…”
Section: Discussion (mentioning)
confidence: 99%