2020
DOI: 10.1101/2020.12.18.423348
Preprint
Inferring brain-wide interactions using data-constrained recurrent neural network models

Abstract: Behavior arises from the coordinated activity of numerous anatomically and functionally distinct brain regions. Modern experimental tools allow unprecedented access to large neural populations spanning many interacting regions brain-wide. Yet, understanding such large-scale datasets necessitates both scalable computational models to extract meaningful features of interregion communication and principled theories to interpret those features. Here, we introduce Current-Based Decomposition (CURBD), an approach fo…

Cited by 36 publications (36 citation statements)
References 91 publications
“…On the other hand, it is also possible that certain biological constraints are essential for understanding why the PFC and ACC consist of multiple subregions with some degree of functional specialisation: for example, this may be necessary to constrain wiring length within the circuit. Such a hypothesis could again be explored in silico, by training networks to approximate multi-region neuronal data (Perich et al., 2020) while introducing sparsifying constraints on network connections into the cost function used to train the network.…”
Section: Directions
confidence: 99%
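The sparsifying constraint described above can be made concrete as a cost function that combines an activity-matching term with an L1 penalty on the recurrent weights. The following is a minimal numpy sketch, not the cited authors' implementation: the `teacher` array is synthetic stand-in data, and `lam` is an assumed hyperparameter controlling the strength of the wiring-cost proxy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 50, 200

# Hypothetical "teacher" activity standing in for recorded multi-region data
teacher = np.tanh(0.1 * rng.standard_normal((n_units, n_steps)).cumsum(axis=1))

# Random recurrent weight matrix to be trained
J = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
lam = 1e-3  # sparsity strength (assumed hyperparameter)

def loss(J, teacher, lam):
    """Activity-matching loss plus an L1 penalty that sparsifies connections."""
    # One-step prediction: pass current rates through J to predict the next frame
    pred = np.tanh(J @ teacher[:, :-1])
    fit = np.mean((pred - teacher[:, 1:]) ** 2)  # match the data
    sparsity = lam * np.abs(J).sum()             # wiring-cost proxy
    return fit + sparsity
```

Minimizing this loss with any gradient-based trainer would trade data fit against connection sparsity, which is the in-silico experiment the excerpt proposes.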
“…But other available directed FC methods could in principle be applied. Promising alternatives are an efficient implementation of the popular dynamic causal modelling (DCM) (Frässle et al., 2021), and artificial neural network modeling approaches such as mesoscale individualized neurodynamic modeling (MINDy) (Singh et al., 2020) and current-based decomposition (CURBD) (Perich et al., 2020). These methods use causal principles and estimation techniques different from the PC algorithm (and related Bayes network methods (Ramsey et al., 2011, 2017)), and thus offer future opportunities to explore the robustness and diversity of mechanistic actflow explanations across diverse FC procedures.…”
Section: Discussion
confidence: 99%
“…For example, in X → Z ← Y, Z is a function of its parents Pa(Z): Z = β_ZX·X + β_ZY·Y, such that the estimate of β_ZX is the weight of the directed connection X → Z, and equivalently for β_ZY. Doing this for every region outputs a FC network, where each directed connection X → Y has a causal interpretation in the sense that, keeping all other regions fixed, a change of one unit in X will cause a change of β_YX in Y (Pearl, 2000; Spirtes et al., 2000; Woodward, 2005). In activity flow models, using a directed FC network implies predicting task-related activity for a held-out region using only its putative causal sources (Figure 1H).…”
Section: PC Algorithm
confidence: 99%
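The regression described above can be illustrated with a small simulation. This is a sketch under assumed ground-truth weights (the values `b_zx` and `b_zy` are invented for illustration): data are generated from the collider X → Z ← Y, and regressing Z on its parents recovers the directed connection weights with the interventional reading the excerpt gives.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Collider structure X -> Z <- Y with known (illustrative) weights
b_zx, b_zy = 0.8, -0.5
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
Z = b_zx * X + b_zy * Y + 0.1 * rng.standard_normal(n)

# Regress Z on its parents Pa(Z) = {X, Y}. The fitted coefficients are the
# directed connection weights: holding Y fixed, a one-unit increase in X
# changes Z by approximately beta[0].
parents = np.column_stack([X, Y])
beta, *_ = np.linalg.lstsq(parents, Z, rcond=None)
```

Repeating this regression for every region, each on its own parent set, yields the directed FC network the excerpt describes.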
“…However, all of these methods are based on correlational estimates of neural interactions, and they cannot accurately deal with confounders such as common input or recurrent connectivity [66]. Perich and Rajan recently proposed a novel approach to describe the interactions of neural networks using data-driven RNN modeling [67]. This technique trains RNN models to match not only their final outputs to target outputs, but also their internal activity to experimentally recorded 'teacher' activity.…”
Section: Perspectives For RNN Modeling Of Hyper-adaptability
confidence: 99%
“…One limitation of network modeling is that training algorithms are not biologically plausible. Trained RNNs can recapitulate animal behaviors and neural dynamics, but most previous studies used biologically implausible algorithms, such as backpropagation through time [73], transfer learning [74], and data-driven RNN modeling [67]. This prevents consideration of the time course of the learning process in relation to the animals' adaptation.…”
Section: Future Directions To Understand The Neural Mechanisms For Hyper-adaptability
confidence: 99%