2020 · Preprint
DOI: 10.1101/2020.07.17.209536

Biased orientation representations can be explained by experience with non-uniform training set statistics

Abstract: Visual acuity is better for vertical and horizontal compared to other orientations. This cross-species phenomenon is often explained by "efficient coding", whereby more neurons show sharper tuning for the orientations most common in natural vision. However, it is unclear if experience alone can account for such biases. Here, we measured orientation representations in a convolutional neural network, VGG-16, trained on modified versions of ImageNet (rotated by 0, 22.5, or 45 degrees counter-clockwise of upright)…
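
As a concrete illustration of the manipulation the abstract describes, the sketch below rotates every training image by one fixed angle before standard preprocessing. This is a minimal sketch assuming a torchvision pipeline; the dataset path, transform composition, and rotation condition are illustrative, not the authors' actual training code.

```python
# Minimal sketch: train VGG-16 on a version of ImageNet in which every image
# is rotated by one fixed angle (0, 22.5, or 45 degrees counter-clockwise).
# Paths, transforms, and hyperparameters are illustrative, not the paper's code.
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from torchvision import datasets, models

ROTATION_DEG = 22.5  # one of the three training conditions in the paper

train_transform = T.Compose([
    # A fixed rotation applied to every image (not random augmentation),
    # so the orientation statistics of the entire training set are shifted.
    T.Lambda(lambda img: TF.rotate(img, ROTATION_DEG)),
    T.RandomResizedCrop(224),
    T.ToTensor(),
])

train_set = datasets.ImageFolder("/path/to/imagenet/train", transform=train_transform)
model = models.vgg16(weights=None)  # trained from scratch on the rotated images
```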

Cited by 5 publications (6 citation statements) · References 45 publications
“…1b). This finding recapitulates our preliminary findings and concurrent work of colleagues, and points to an origin in image statistics (Benjamin et al., 2019; Henderson and Serences, 2021). However, we also found that networks trained on rotated images do partially retain sensitivity to cardinal orientations; they do not simply rotate their sensitivity by 45° (SI Fig.…”
Section: Results (supporting)
confidence: 92%
“…We examine two model systems. First, we show that deep artificial networks trained on natural image classification show patterns of sensitivity similar to humans', and that this is partly a consequence of image statistics (also see Benjamin et al. (2019); Henderson and Serences (2021)) but is also partially due to factors inherent in network architecture. We then leverage results from the study of linear networks to mathematically describe how gradient descent naturally causes learned representations to reflect the input statistics.…”
Section: Introduction (mentioning)
confidence: 86%
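
The linear-network result alluded to in that statement can be stated in one line. For a purely linear map trained by gradient descent on squared error, the weight dynamics are driven directly by the input statistics. The following is standard background for linear networks, sketched here for orientation, not the cited authors' exact derivation:

```latex
% Gradient flow for a linear network \hat{y} = Wx under squared error
% L(W) = \tfrac{1}{2}\,\mathbb{E}\lVert y - Wx \rVert^{2}:
\frac{dW}{dt} = -\nabla_{W} L = \Sigma_{yx} - W\,\Sigma_{xx},
\qquad \Sigma_{yx} = \mathbb{E}[y x^{\top}],\quad \Sigma_{xx} = \mathbb{E}[x x^{\top}]
```

Because the fixed point is W* = Σ_yx Σ_xx⁻¹ (when Σ_xx is invertible), the learned weights inherit structure from the input covariance, which is one way training-set statistics can imprint orientation biases on a network.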
“…These same insights can be applied to the study of other spatial properties, like area (see Corbett & Oriet, 2011; Marchant et al., 2013; Raidvee et al., 2020; Solomon et al., 2011; Yousif, Aslin, & Keil, 2020; Yousif & Keil, 2019; Yousif & Keil, 2021a), volume (Bennette, Keil, & Yousif, 2021; Ekman & Junge, 1961; Teghtsoonian, 1965), and orientation (see Appelle, 1972; Lee et al., 2003; Girshick et al., 2011; Henderson & Serences, 2021; Sadalla & Montello, 1989; Yousif, Chen, & Scholl, 2020). In all of these cases, it may be useful to consider the possibility that any piece of information may be formatted in multiple ways simultaneously.…”
Section: Other Spatial Formats (mentioning)
confidence: 93%
“…This affords testing hypotheses relating to the composition of data-dependent features. In order to explain the superior visual acuity for horizontal and vertical information in artificial and biological neural networks, Henderson & Serences tested whether ANNs would learn a similar bias and whether that bias depended on statistical regularities in the training datasets (Henderson & Serences, 2020). First, by measuring the distribution of tuning centres across neurons, they showed that their pre-trained ANN did exhibit a bias toward over-representing cardinal orientations.…”
Section: Toy With the Brain (mentioning)
confidence: 99%
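
The measurement described in that statement — estimating each unit's preferred orientation and checking whether cardinal orientations are over-represented — can be sketched roughly as follows. This assumes probing a pretrained VGG-16 with sine gratings; the `make_grating` helper, the probed layer, and the grating parameters are illustrative assumptions, not the authors' code.

```python
# Rough sketch: estimate each unit's preferred orientation ("tuning centre")
# by probing a trained network with sine gratings at many orientations.
# Layer choice, grating parameters, and helper names are illustrative.
import numpy as np
import torch
from torchvision import models

def make_grating(theta_deg, size=224, spatial_freq=8.0):
    """Sinusoidal grating at orientation theta_deg, tiled to 3 channels."""
    theta = np.deg2rad(theta_deg)
    xs = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(xs, xs)
    grating = np.sin(2 * np.pi * spatial_freq * (x * np.cos(theta) + y * np.sin(theta)))
    img = np.repeat(grating[None], 3, axis=0).astype(np.float32)
    return torch.from_numpy(img)[None]  # shape (1, 3, size, size)

model = models.vgg16(weights="IMAGENET1K_V1").eval()
probe = model.features[:10]  # an intermediate conv block (illustrative choice)

orientations = np.arange(0, 180, 5)  # gratings repeat every 180 degrees
responses = []
with torch.no_grad():
    for theta in orientations:
        act = probe(make_grating(theta))        # (1, C, H, W)
        responses.append(act.mean(dim=(2, 3)))  # mean response per channel
responses = torch.cat(responses)                # (n_orientations, C)

# Each unit's tuning centre is the orientation that drives it most strongly;
# a histogram of these centres peaking at 0 and 90 degrees would indicate
# the cardinal bias the citing authors describe.
tuning_centres = orientations[responses.argmax(dim=0).numpy()]
```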