Preprint (2020)
DOI: 10.1101/2020.06.15.148114

Untangling stability and gain modulation in cortical circuits with multiple interneuron classes

Abstract: Synaptic inhibition is the mechanistic backbone of a suite of cortical functions, not least maintaining overall network stability and modulating neuronal gain. Past cortical models have assumed simplified recurrent networks in which all inhibitory neurons are lumped into a single effective pool. In such models the mechanics of inhibitory stabilization and gain control are tightly linked in opposition to one another, meaning high gain coincides with low stability and vice versa. This teth…

Cited by 26 publications (47 citation statements)
References 132 publications (356 reference statements)
“…Furthermore, we establish a mathematical link between cell-type-specific response to perturbation and sub-circuit stability. By implementing those insights in these data-compatible models we provide new evidence, aligned with convergent experimental 15 and theoretical 33 arguments, that PV interneurons play a major role in circuit stabilization. Subsequently, we build upon the link between low-dimensional and high-dimensional models provided by mean-field theory 26,35 , and are able to construct a family of high-dimensional rate models that fit the experimentally observed distribution of activity of each cell-type and its dependence on contrast.…”
Section: Introduction
confidence: 57%
“…3b,c). We conclude that a first novel descriptor of the operating regime of circuits with multiple cell-types is PV-stabilization, meaning that potential circuit instability is stabilized by PV cells and not by SOM cells (see also 33). This describes most of our models that are consistent with the data, but about 20% show paradoxical SOM response at higher contrasts (in addition to paradoxical PV response) and so are stabilized by the combination of PV and SOM and not by either alone.…”
Section: Results
confidence: 82%
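The paradoxical response invoked above is the standard signature of an inhibition-stabilized network (ISN): when recurrent excitation alone is unstable, adding input to an inhibitory population *lowers* its steady-state rate. A minimal sketch with a linear two-population (E, I) rate model illustrates this; all weights and inputs here are hypothetical illustration values, not parameters from the paper.

```python
import numpy as np

# Linear rate model: tau * dr/dt = -r + W r + h.
# ISN regime: W_EE > 1, so the E subnetwork is unstable on its own
# and inhibition is required to stabilize the circuit.
# (All parameter values below are illustrative, not from the paper.)
W = np.array([[2.0, -1.0],    # E <- E,  E <- I
              [2.0, -0.5]])   # I <- E,  I <- I

def steady_state(h):
    """Fixed point of the linear dynamics: r = (I - W)^{-1} h."""
    return np.linalg.solve(np.eye(2) - W, h)

r_base = steady_state(np.array([1.0, 1.0]))  # baseline rates (r_E, r_I)
r_pert = steady_state(np.array([1.0, 1.2]))  # extra drive to the I population

print(r_base, r_pert)
# Paradoxical effect: extra input to I lowers the I rate, because the
# E population withdraws even more recurrent excitation in response.
assert r_pert[1] < r_base[1]
```

Running the same perturbation on a non-ISN version (e.g. W_EE < 1) makes the inhibitory rate increase as naively expected, which is why the sign of this response is used as a stabilization probe.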
See 2 more Smart Citations
“…To study the impact of our learning rule on network performance and dissect the effects of its different components, we train RSNNs using five different approaches for each task. These, illustrated in Figure 3, are as follows: (i) BPTT, which updates weights using the exact gradients shown in Figure 3ai; (ii) e-prop [23], the state-of-the-art method for biologically plausible training of RSNNs, shown in Figure 3aii; (iii) TRTRL, the truncated RTRL given in (7) without the cell-type approximation, shown in Figure 3aiii; (iv) MDGL, which incorporates the cell-type approximation given in (11) and (12) using only two cell types, shown in Figure 3aiv; (v) NL-MDGL, a nonlocal version of MDGL, where the gain is replaced by w_αβ = ⟨w_jp⟩_{j∈α, p∈β} even for w_jp = 0, so that the modulatory signal diffuses to all cells in the network, shown in Figure 3av. We note that the factor ∂E/∂z_{j,t}, which depends on future errors as mentioned earlier, participates in the generation of all training results pertaining to MDGL in the main text (Figures 4-7); in the supplementary materials, we derive an online approximation to MDGL and demonstrate (via simulation) that it does not lead to significant performance degradation (Figure S3, Section S3.2).…”
Section: Simulation Of Multidigraph Learning In RSNNs
confidence: 99%
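The cell-type approximation described in the excerpt replaces per-synapse weights w_jp by a single type-to-type gain w_αβ, the average weight from presynaptic type β to postsynaptic type α. A minimal sketch of that averaging step, assuming a dense weight matrix and two cell types (so the average runs over all pairs, as in the nonlocal NL-MDGL variant; the array names and sizes are hypothetical):

```python
import numpy as np

# Hypothetical setup: 6 cells split into two cell types (e.g. E and I).
rng = np.random.default_rng(0)
n = 6
cell_type = np.array([0, 0, 0, 1, 1, 1])  # type label per cell
W = rng.normal(size=(n, n))               # W[j, p]: weight from cell p to cell j

# Replace each entry by its type-to-type mean: w_ab = < w_jp >_{j in a, p in b}.
W_bar = np.zeros_like(W)
for a in (0, 1):
    for b in (0, 1):
        mask = np.outer(cell_type == a, cell_type == b)
        W_bar[mask] = W[mask].mean()

# Each 3x3 block of W_bar is now constant: one gain per (post, pre) type pair.
assert np.allclose(W_bar[:3, :3], W[:3, :3].mean())
```

The local MDGL variant described in the text would instead average only over existing (nonzero) synapses, so the modulatory signal stays confined to connected cells rather than diffusing network-wide.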