2020
DOI: 10.1016/j.neuron.2020.07.040
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence

Abstract: A potentially organizing goal of the brain and cognitive sciences is to accurately explain domains of human intelligence as executable, neurally mechanistic models. Years of research have led to models that capture experimental results in individual behavioral tasks and individual brain regions. We here advocate for taking the next step: integrating experimental results from many laboratories into suites of benchmarks that, when considered together, push mechanistic models toward explaining entire domains of i…


Cited by 160 publications (185 citation statements)
References 32 publications
“…In this study, we address this challenge by selecting a set of DNNs trained on the same set of training images using the same learning algorithm, with the same encoder architecture, while being optimized for different tasks. Our results, thus, complement previous studies that focussed on other factors influencing the learning of DNN parameters such as architecture 19,35–37, and the learning mechanism 38–40. Our approach accelerates the divide-and-conquer strategy of investigating human brain function by systematically and carefully manipulating the DNNs used to map the brain in their fundamental parameters one by one 20,41–43.…”
Section: Discussion (supporting)
confidence: 88%
“…In this study, we address this challenge by selecting a set of DNNs trained on the same set of training images using the same learning algorithm, with the same encoder architecture, while being optimized for different tasks. Our results, thus, complement previous studies that focussed on other factors influencing the learning of DNN parameters such as architecture 19,35–37, and the learning mechanism 38–40. A limitation of our study is that our findings are restricted to functions related to scene perception. We limited our study to scene perception because there are only a few image datasets 8,45 that have annotations corresponding to a diverse set of tasks, thus allowing DNNs to be optimized independently on these tasks.…”
Section: Discussion (supporting)
confidence: 83%
“…There have been many efforts to design reasonable criteria to judge the fitness of a model on experimental recordings (Weaver & Wearne, 2006; Druckmann et al., 2008; Jolivet, Kobayashi, et al., 2008; Jolivet, Roth, et al., 2008; Gerstner & Naud, 2009; Schrimpf et al., 2018; Schrimpf et al., 2020). However, a consensus hasn't been reached.…”
Section: Designed (mentioning)
confidence: 99%
“…Deep convolutional neural networks (DCNN) are a way to close this gap in knowledge by linking changes in processing to performance in a fully controlled yet statistically rich setting (Kietzmann et al., 2019a; Richards et al., 2019; Scholte, 2018; Yamins and DiCarlo, 2016). Intriguingly, these networks not only parallel human performance on some object recognition tasks (VanRullen, 2017), but they also feature processing characteristics that bear a lot of resemblance to the visual ventral stream in primates (Eickenberg et al., 2017; Güçlü and van Gerven, 2015; Khaligh-Razavi and Kriegeskorte, 2014; Kubilius et al., 2018; Schrimpf et al., 2020; Yamins et al., 2014). Leveraging this link between neural processing and performance has already enhanced insight into the potential mechanisms underlying shape perception (Kubilius et al., 2016), scene segmentation (Seijdel et al., 2020) and the role of recurrence during object recognition (Kar et al., 2019; Kietzmann et al., 2019b).…”
Section: Introduction (mentioning)
confidence: 99%