A deep learning framework for neuroscience
2019 | DOI: 10.1038/s41593-019-0520-2

Cited by 757 publications (701 citation statements) | References 74 publications
“…While we here present multiple approaches that can increase consistency (cocktail-blank, dropout, and the choice of distance measure), significant differences remain. For computational neuroscience to take full advantage of the deep learning framework (Cichy and Kaiser, 2019;Kietzmann et al, 2019a;Kriegeskorte and Douglas, 2018;Richards et al, 2019), we therefore suggest that DNNs should be treated similarly to experimental participants, as analyses should be based on groups of network instances. Representational consistency as defined here will give researchers a way to estimate the expected network variability for a given training scenario, and thereby enable them to better estimate how many networks are required to ensure that the insights drawn from them will generalize.…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
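The recommendation in the statement above, to analyze groups of trained network instances and estimate their representational consistency, can be illustrated with a short sketch. This is a minimal illustration under assumptions, not code from the cited papers: the correlation-distance RDMs, the Spearman comparison of RDMs, and the random stand-in activations are all hypothetical choices for demonstration only.

```python
# Hedged sketch: comparing representational geometry across multiple trained
# network instances ("representational consistency"). Illustrative only;
# the distance measure and activations here are placeholder assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed representational dissimilarity matrix from a
    (stimuli x units) activation matrix, using correlation distance."""
    return pdist(activations, metric="correlation")

def representational_consistency(act_a, act_b):
    """Spearman correlation between the RDMs of two network instances
    evaluated on the same stimulus set."""
    return spearmanr(rdm(act_a), rdm(act_b)).correlation

# Estimate expected variability across a group of instances
# (random stand-ins for layer activations of separately trained networks).
rng = np.random.default_rng(0)
group = [rng.normal(size=(50, 256)) for _ in range(5)]  # 5 instances, 50 stimuli
pairwise = [representational_consistency(group[i], group[j])
            for i in range(len(group)) for j in range(i + 1, len(group))]
print(f"mean pairwise consistency: {np.mean(pairwise):.3f}")
```

In this reading, the mean and spread of the pairwise consistencies give a rough sense of how many instances are needed before conclusions drawn from one network can be expected to generalize to retrained ones.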
“…These results suggest that selection of the task goal is vastly more important for the resulting solutions than the particular architecture. Future work should put heavy emphasis on testing the differential effects of task goal, network architecture, and optimization procedure (Richards et al., 2019).…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
“…Furthermore, since this long timescale coincidence between two inputs is only permissive for BTSP, but does not determine the sign of the plasticity, it cannot be classified as correlative in the same sense as classical Hebbian or even anti-Hebbian forms of plasticity. As suggested by our network model, if plateau potentials are generated by mismatch between a target instructive input and the output of the local circuit, as reflected by dendritically-targeted feedback inhibition, bidirectional BTSP can implement objective-based learning (48,49). In addition to providing insight into the fundamental mechanisms of spatial memory formation in the hippocampus, these findings suggest new directions for general theories of biological learning and the development of artificial learning systems (44,50).…”
Section: D) | Citation type: mentioning | Confidence: 99%