2019
DOI: 10.1073/pnas.1905544116

Recurrence is required to capture the representational dynamics of the human visual system

Abstract: The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within an…

Citation types: 23 supporting, 313 mentioning, 0 contrasting
Cited by 313 publications (340 citation statements)
References 63 publications

“…In part, this can be attributed to the fact that this distance measure is sensitive to differences in overall network activation magnitudes, which may overshadow more nuanced pattern dissimilarities, in line with the lower consistency observed for norm-standardizing Euclidean distances (unit-length pattern-based Euclidean distance). Although further experiments are required, we expect our results to generalize to representations learned by (unrolled) recurrent neural network architectures (Kar et al., 2019; Spoerer et al., 2019), if not explicitly constrained (Kietzmann et al., 2019b). For an investigation of recurrent neural network dynamics arising from various network architectures, see Maheswaranathan et al. (2019).…”
Section: Discussion
confidence: 64%
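
The distance-measure point above is easy to see in code. Below is a minimal sketch (illustrative only; the arrays, shapes, and magnitudes are invented, not taken from the cited papers) contrasting raw Euclidean distance, which is dominated by overall activation magnitude, with the unit-length (norm-standardized) variant, which compares only pattern shape:

import numpy as np

rng = np.random.default_rng(0)

# Two activation patterns with nearly identical shape but very different
# overall magnitude, e.g. one network layer's responses to two stimuli.
pattern_a = rng.normal(size=1000)
pattern_b = 3.0 * pattern_a + 0.1 * rng.normal(size=1000)

# Raw Euclidean distance: dominated by the 3x difference in magnitude.
raw_euclidean = np.linalg.norm(pattern_a - pattern_b)

# Unit-length (norm-standardized) Euclidean distance: each pattern is
# scaled to unit L2 norm first, so only the pattern's shape matters.
unit_a = pattern_a / np.linalg.norm(pattern_a)
unit_b = pattern_b / np.linalg.norm(pattern_b)
unit_euclidean = np.linalg.norm(unit_a - unit_b)

print(f"raw Euclidean:  {raw_euclidean:.3f}")   # large, magnitude-driven
print(f"unit Euclidean: {unit_euclidean:.3f}")  # small, pattern-driven
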
“…This provides one motivation for reweighting features: since we know that our measurement processes can introduce bias in feature sampling, requiring a model to match the measured prevalence of features might be too strict a criterion. Instead of reweighting existing features that emerge via task training, researchers have recently started using data from the human ventral stream to directly learn the network features themselves in end-to-end training on natural stimuli (Kietzmann et al., 2019; Seeliger et al., 2019). Such procedures serve the important function of verifying that a given network architecture is in principle capable of mirroring the representational transitions observed in the brain.…”
Section: Reweighting Features Improves Hit Correspondence and Reveal…
confidence: 99%
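
As a concrete illustration of feature reweighting, here is a hedged sketch (all sizes and the synthetic "brain" data are hypothetical, not drawn from the cited work) that fits one ridge-regression weight vector per measured channel, so model features are reweighted to fit the data rather than taken at their task-trained prevalence:

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)

# Hypothetical data: model-layer features and measured brain responses
# for the same set of stimuli.
n_stimuli, n_features, n_channels = 200, 512, 64
model_features = rng.normal(size=(n_stimuli, n_features))  # e.g. CNN layer activations
mixing = rng.normal(size=(n_channels, n_channels))
brain_responses = (model_features[:, :n_channels] @ mixing
                   + 0.5 * rng.normal(size=(n_stimuli, n_channels)))  # synthetic "MEG/fMRI"

# Fit one ridge weight vector per measured channel on a training split;
# this reweights the model's features instead of using their raw prevalence.
reweighting = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(
    model_features[:100], brain_responses[:100])

# Evaluate on held-out stimuli: how well do reweighted features predict the brain?
print("held-out R^2:", reweighting.score(model_features[100:], brain_responses[100:]))
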
“…Recently, this topic has received considerable attention in the computational neuroscience literature. In fact, deep convolutional neural networks have emerged as successful models of the ventral stream (Yamins et al., 2014), and authors investigating the limitations of purely feedforward architectures within this family have proposed including temporal dynamics and adaptive mechanisms (Vinken et al., 2019) or recurrent computations (Kar et al., 2019; Kietzmann et al., 2019; Tang et al., 2018). Indeed, it has been suggested that convolutional networks that excel at object recognition need to be very deep simply to approximate operations that could be implemented more efficiently by recurrent architectures (Kar et al., 2019; Kubilius et al., 2018; Liao and Poggio, 2016).…”
Section: Discussion
confidence: 99%
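
The depth-versus-recurrence argument can be illustrated with a weight-tied recurrent convolutional block. This is a minimal sketch under assumed shapes, not the architecture of any cited paper: unrolling the same convolution for T time steps gives the effective depth of a T-layer feedforward stack while storing only one layer's parameters.

import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    # A weight-tied convolutional block: unrolling it for `steps` time steps
    # applies the same convolution repeatedly, mimicking the depth of a
    # `steps`-layer feedforward stack at a fraction of the parameter count.
    def __init__(self, channels: int, steps: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv(x))          # feedforward sweep (t = 0)
        for _ in range(self.steps - 1):       # recurrent sweeps (t = 1 .. T-1)
            h = torch.relu(self.conv(x + h))  # recurrent input re-enters
        return h

x = torch.randn(1, 16, 32, 32)
block = RecurrentConvBlock(channels=16, steps=4)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
# One convolution's parameters, reused 4 times ~ effective depth of 4 layers.
print(sum(p.numel() for p in block.parameters()))
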
“…Recently it has been shown that models of the ventral stream based on deep convolutional neural networks can be improved in their predictive power for perception and neural activity by including adaptation mechanisms (Vinken et al., 2019) and recurrent processing (Kar et al., 2019; Kietzmann et al., 2019; Tang et al., 2018). Moreover, a progressive increase in the importance of intrinsic processing along the ventral stream may be expected, given that intrinsic temporal scales increase along various cortical hierarchies in primates and rodents (Chaudhuri et al., 2015; Himberger et al., 2018; Murray et al., 2014; Runyan et al., 2017).…”
Section: Introduction
confidence: 99%