2021
DOI: 10.48550/arxiv.2104.13714
Preprint
The Algonauts Project 2021 Challenge: How the Human Brain Makes Sense of a World in Motion

Cited by 5 publications (8 citation statements)
References 7 publications
“…We focus on predicting the brain response from the corresponding video stimuli.[40] We adopt the general voxel-wise neural encoding framework that has been widely used in the literature.[41–43] In particular, DNN models are used to extract feature representations from each individual video stimulus.…”
Section: Results
confidence: 99%
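The voxel-wise encoding framework quoted above can be sketched in a few lines: DNN features extracted per video are mapped to each voxel's response with a linear (ridge) regression. This is a minimal illustration with synthetic data; the array shapes, the ridge penalty, and all variable names are assumptions, not details from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: one feature vector per video, one response per voxel
n_train, n_feat, n_vox = 200, 50, 10
X = rng.standard_normal((n_train, n_feat))            # DNN features per stimulus
W_true = rng.standard_normal((n_feat, n_vox))
Y = X @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))  # fMRI responses

# Closed-form ridge regression, fit jointly for all voxels:
# W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Predicted responses and per-voxel prediction accuracy (Pearson r)
Y_pred = X @ W
r = np.array([np.corrcoef(Y[:, v], Y_pred[:, v])[0, 1] for v in range(n_vox)])
```

In practice each voxel gets its own accuracy score `r[v]`, and the mean (or noise-ceiling-normalized) correlation over voxels is a common summary of encoding performance.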
“…Details on data acquisition and preprocessing are provided elsewhere.[40] Briefly, the dataset consists of 1102 fMRI brain responses per subject (10 subjects): 1000 for training and 102 held out for online submission. Each stimulus is a 3-second clip of a daily event; participants watched the videos without sound.…”
Section: Methods
confidence: 99%
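The data layout described in this excerpt is straightforward to mirror in code. The sketch below uses synthetic placeholder arrays; only the counts (1102 responses per subject, 1000 train / 102 held out) come from the excerpt, while the voxel count and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per subject: 1102 stimulus/response pairs (synthetic placeholders here)
n_stimuli, n_voxels = 1102, 500
responses = rng.standard_normal((n_stimuli, n_voxels))

# 1000 responses are available for model fitting; the remaining 102 are
# held out, and predictions for them are scored via online submission.
train_responses = responses[:1000]
heldout_responses = responses[1000:]
```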
“…Higher values correspond to higher quality. The challenge was run in partnership with the Computational Cognitive Neuroscience (CCN) conference (Cichy et al., 2021; Naselaris et al., 2018). For the challenge, participants submit the predictions of their computational model on held-out brain data (see http://algonauts.csail.mit.edu/challenge.html for the final challenge leaderboard and details).…”
Section: CNR (Contrast-to-Noise Ratio)
confidence: 99%
“…In pursuit of interdisciplinary and transparent research, we used portions of BMD in The Algonauts Project 2021: How the Human Brain Makes Sense of a World in Motion. This open challenge, in partnership with the Computational Cognitive Neuroscience (CCN) conference (Cichy et al., 2021; Naselaris et al., 2018), invites participants to predict held-out brain data using their computational models. The top three entries in The Algonauts Project 2021 challenge each took drastically different modeling approaches (see reports in Supplementary), highlighting the creative space opened by BMD, which lies at the intersection of natural and artificial intelligence research. For a full account of visual event understanding, research needs to look beyond the classical visual brain and into the whole brain, now possible with BMD.…”
confidence: 99%
“…We formulate three desiderata for a suitable model of scene categorization: it should predict (1) the neural representations underlying scene categorization, (2) human scene categorization behavior, and (3) their relationship. A potential candidate class for the model are deep convolutional neural networks, which have been shown to predict activity in the visual cortex better than other models (Cichy et al., 2021; Schrimpf et al., 2020; Kietzmann et al., 2019; Yamins et al., 2014). A particular instantiation, a recurrent convolutional neural network (RCNN) named BLnet, that is, a model with learned bottom-up as well as lateral connectivity, has been shown to predict RTs in an object categorization task well and better than a range of control models (Spoerer, Kietzmann, Mehrer, Charest, & Kriegeskorte, 2020).…”
Section: Introduction
confidence: 99%