2020
DOI: 10.1371/journal.pcbi.1008215
Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision

Abstract: Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale vis…


Cited by 89 publications (109 citation statements)
References 56 publications
“…It is also consistent with recent empirical results indicating that the main change induced by literacy is a sensitivity to letter shapes and their precise locations [40,41]. As also concluded by others [35], the simulation of recurrent interactions may be needed to capture the full temporal profile of word-related activations and to mimic fMRI signals.…”
Section: Responses to a Hierarchy of Word-like Stimuli (supporting)
confidence: 89%
“…A failure to capture some of these properties with a simple feedforward convolutional neural network would be interesting inasmuch as it may point to the need for additional properties, for instance recurrent and/or top-down connections [12,14,31,35].…”
Section: Aims of the Present Study (mentioning)
confidence: 99%
“…The observation of increased differences with increasing network depth is in line with findings from the domain of machine learning that compared network representations using methods related to CCA (svCCA [6], pwCCA [7], and CKA [8]). Although further experiments are required, we expect our results to generalize to representations learned by (unrolled) recurrent neural network architectures [25,26], if not explicitly constrained [27]. For an investigation of recurrent neural network dynamics arising from various network architectures, see Maheswaranathan et al. [28].…”
Section: Discussion (mentioning)
confidence: 77%
“…As a step in the direction of increasing the biological plausibility of deep network architectures, we here designed vNet such that the model receptive field sizes mirror the progression of foveal receptive field sizes across the human visual hierarchy. Future work should explore to what extent the interplay of ecoset and the introduction of further biological details, such as recurrence (30–34), skip connections, and biologically more realistic learning rules, can further improve model predictions (6, 8). Another aspect worth considering is the learning objective.…”
Section: Discussion (mentioning)
confidence: 99%