Perception, cognition and behavior rely on flexible communication between microcircuits in distinct cortical regions. The mechanisms underlying rapid information rerouting between such microcircuits are still unknown. It has been proposed that changing patterns of coherence between local gamma rhythms support flexible information rerouting. The stochastic and transient nature of gamma oscillations in vivo, however, is hard to reconcile with such a function. Here we show that models of cortical circuits near the onset of oscillatory synchrony selectively route input signals despite the short duration of gamma bursts and the irregularity of neuronal firing. In canonical multiarea circuits, we find that gamma bursts spontaneously arise with matched timing and frequency and that they organize information flow by large-scale routing states. Specific self-organized routing states can be induced by minor modulations of background activity.
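The idea that transient gamma bursts arise in circuits poised near the onset of oscillatory synchrony can be illustrated with a toy linear model: a noise-driven damped oscillator tuned close to a Hopf bifurcation. This sketch is purely illustrative and is not the circuit model used in the study; all parameter values are assumptions chosen for clarity.

```python
import numpy as np
from scipy.signal import welch

# Toy sketch (not the paper's model): a noise-driven damped oscillator
# poised near the onset of synchrony. Noise transiently excites the
# weakly damped 40 Hz mode, producing short, irregular gamma bursts
# rather than a sustained rhythm.
rng = np.random.default_rng(0)
fs = 1000.0                    # sampling rate (Hz)
n = 200_000                    # 200 s of simulated activity
omega = 2 * np.pi * 40.0       # gamma-band resonance frequency (rad/s)
lam = 20.0                     # damping rate (1/s): bursts last ~50 ms

decay = np.exp((-lam + 1j * omega) / fs)   # exact one-step propagator
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(fs)
z = np.zeros(n, dtype=complex)
for t in range(1, n):
    z[t] = decay * z[t - 1] + noise[t]

# the power spectrum of the real part peaks in the gamma band
f, pxx = welch(z.real, fs=fs, nperseg=4096)
peak = f[np.argmax(pxx)]
print(round(peak))
```

Because the 40 Hz mode is weakly damped rather than self-sustained, the trace shows irregular, short-lived bursts, yet its spectrum still carries a clear gamma-band peak.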
Dynamic oscillatory coherence is believed to play a central role in flexible communication between brain circuits. To test this communication-through-coherence hypothesis, experimental protocols that allow reliable control of phase relations between neuronal populations are needed. In this modeling study, we explore the potential of closed-loop optogenetic stimulation for the control of functional interactions mediated by oscillatory coherence. The theory of nonlinear oscillators predicts that the efficacy of local stimulation will depend not only on its intensity but also on its timing relative to the ongoing oscillation in the target area. Induced phase shifts are expected to be stronger when the stimulation is applied within specific narrow phase intervals; conversely, stimulation of the same or even greater intensity is less effective when timed randomly. Stimulation should thus be properly phased with respect to ongoing oscillations (in order to optimally perturb them), and the timing of stimulation onset must be determined by real-time phase analysis of simultaneously recorded local field potentials (LFPs). Here, we introduce an electrophysiologically calibrated model of Channelrhodopsin-2 (ChR2)-induced photocurrents, based on fits that hold over two decades of light intensity. Through simulations of a neural population that undergoes coherent gamma oscillations, either spontaneously or as an effect of continuous optogenetic driving, we show that precisely timed photostimulation pulses can be used to shift the phase of the oscillation, even at transduction rates below 25%. We then consider a canonical circuit of two interconnected neural populations oscillating at gamma frequency in a phase-locked manner.
We demonstrate that photostimulation pulses applied locally to a single population can induce, if precisely phased, a lasting reorganization of the phase-locking pattern and hence modify functional interactions between the two populations.
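The closed-loop principle above rests on estimating the instantaneous phase of an ongoing oscillation and triggering pulses only in a narrow phase window. A minimal sketch of that phase-detection step, using the Hilbert transform on an idealized gamma-band signal (all names and numbers here are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.signal import hilbert

# Toy sketch of closed-loop phase targeting: estimate the instantaneous
# phase of a gamma-band "LFP" via the Hilbert transform, then flag the
# samples at which a stimulation pulse would be triggered, i.e. whenever
# the phase enters a narrow window around a chosen target phase.
fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)     # 2 s of signal
lfp = np.sin(2 * np.pi * 40 * t)  # idealized 40 Hz gamma oscillation

phase = np.angle(hilbert(lfp))    # instantaneous phase in [-pi, pi]
inst_freq = fs * np.diff(np.unwrap(phase)).mean() / (2 * np.pi)

target = 0.0                      # stimulate near this phase (rad)
window = 0.1                      # half-width of the trigger window (rad)
# wrap the phase difference into [-pi, pi] before thresholding
trigger = np.abs(np.angle(np.exp(1j * (phase - target)))) < window

print(round(inst_freq, 1), trigger.sum())
```

In a real closed-loop experiment the phase would have to be estimated causally (e.g. from a sliding window), since the Hilbert transform used here sees the whole signal at once.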
Purpose: Identification of critical areas in presurgical evaluations of patients with temporal lobe epilepsy is the most important step prior to resection. According to the “epileptic focus model”, localization of seizure onset zones is the main task to be accomplished. Nevertheless, a significant minority of epileptic patients continue to experience seizures after surgery (even when the focus is correctly located), an observation that is difficult to explain under this approach. However, if attention is shifted from a specific cortical location toward the network properties themselves, then the epileptic network model does allow us to explain unsuccessful surgical outcomes.

Methods: The intraoperative electrocorticography records of 20 patients with temporal lobe epilepsy were analyzed in search of interictal synchronization clusters. Synchronization was analyzed, and the stability of highly synchronized areas was quantified. Surrogate data were constructed and used to statistically validate the results.

Results: Our results show the existence of highly localized and stable synchronization areas in both the lateral and the mesial areas of the temporal lobe ipsilateral to the clinical seizures. Synchronization areas seem to play a central role in the capacity of the epileptic network to generate clinical seizures. Resection of stable synchronization areas is associated with elimination of seizures; nonresection of synchronization clusters is associated with the persistence of seizures after surgery.

Discussion: We suggest that synchronization clusters and their stability play a central role in the epileptic network, favoring seizure onset and propagation. We further speculate that the stability distribution of these synchronization areas would differentiate normal from pathologic cases.
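A standard pairwise statistic for locating synchronization clusters of the kind described above is the phase-locking value (PLV) between channel pairs. The following sketch is illustrative only (synthetic signals, not the study's ECoG pipeline): two channels sharing a rhythm form a high-PLV cluster, while an independent channel does not.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative sketch (not the study's pipeline): a phase-locking-value
# (PLV) matrix across channels, a common pairwise statistic for finding
# interictal synchronization clusters in multichannel records.
rng = np.random.default_rng(1)
fs, dur = 500.0, 4.0
t = np.arange(0, dur, 1 / fs)

common = np.sin(2 * np.pi * 8 * t)                 # shared 8 Hz rhythm
chans = np.vstack([
    common + 0.1 * rng.standard_normal(t.size),    # cluster: channels 0-1
    common + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),                   # independent channel 2
])

phases = np.angle(hilbert(chans, axis=1))
n = chans.shape[0]
plv = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # |mean resultant| of the phase difference: 1 = locked, 0 = none
        plv[i, j] = plv[j, i] = np.abs(
            np.mean(np.exp(1j * (phases[i] - phases[j]))))

print(np.round(plv, 2))
```

Thresholding such a matrix (validated against surrogate data, as in the study) yields candidate synchronization clusters.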
The cortical microcircuit can dynamically adjust to dramatic changes in the strength, scale, and complexity of its input. In the primary visual cortex (V1), pyramidal cells (PCs) integrate widely across space when signals are weak, but integrate narrowly when signals are strong, a phenomenon known as contrast-dependent surround suppression. Theoretical work has proposed that local interneurons could mediate a shift from cooperation to competition of PCs across cortical space, underlying this computation. We combine calcium imaging and electrophysiology to constrain a stabilized supralinear network model that explains how the four principal cell types in layer 2/3 (L2/3) of mouse V1 (somatostatin (SST), parvalbumin (PV), and vasoactive intestinal peptide (VIP) interneurons, together with PCs) transform inputs from layer 4 (L4) PCs to encode drifting gratings of varying size and contrast. Using bidirectional optogenetic perturbations, we confirm key predictions of the model. Our data and modeling show that network nonlinearities set up by recurrent amplification mediate a shift from a positive PC-VIP feedback loop at small size and low contrast to a negative PC-SST feedback loop at large size and high contrast to support this flexible computation. This may represent a widespread mechanism for gating competition across cortical space to optimally meet task demands.
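The weak-input/strong-input shift described above is the hallmark regime of stabilized supralinear networks (SSNs). As a hedged illustration, here is a minimal two-unit (E, I) SSN with a power-law gain; the parameters are commonly used published SSN values, not values fitted to the data described above, and the sketch shows only the contrast side of the computation (supralinear response to weak drive, sublinear inhibition-stabilized response to strong drive), not the spatial surround.

```python
import numpy as np

# Minimal stabilized supralinear network (SSN): one E and one I unit
# with a supralinear power-law gain, r = k * [input]_+^n.
k, n = 0.04, 2.0                      # gain and exponent
W = np.array([[1.25, -0.65],          # onto E: from E, from I
              [1.20, -0.50]])         # onto I: from E, from I
tau = np.array([0.020, 0.010])        # time constants (s)

def steady_state(h, dt=1e-4, steps=50_000):
    """Integrate tau * dr/dt = -r + k [W r + h]_+^n to its fixed point."""
    r = np.zeros(2)
    for _ in range(steps):
        drive = W @ r + h
        r = r + dt / tau * (-r + k * np.clip(drive, 0.0, None) ** n)
    return r

rates = {h: steady_state(np.array([h, h])) for h in (1.0, 2.0, 20.0, 40.0)}
for h, r in rates.items():
    print(h, np.round(r, 2))
```

Doubling a weak input more than doubles the excitatory rate (supralinear regime), whereas doubling a strong input less than doubles it: the network has entered the inhibition-stabilized, sublinear regime.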
Identifying the regime in which the cortical microcircuit operates is a prerequisite to determining the mechanisms that mediate its response to stimuli. Classic modeling work has started to characterize this regime through the study of perturbations, but an encompassing perspective that links the full ensemble of the network’s responses to appropriate descriptors of the cortical operating regime is still lacking. Here we develop a class of mathematically tractable models that exactly describe the modulation of the distribution of cell-type-specific calcium-imaging activity with the contrast of a visual stimulus. The model’s fit recovers signatures of the connectivity structure found in mouse visual cortex. Analysis of this structure subsequently reveals parameter-independent relations between the responses of different cell types to perturbations and each interneuron’s role in circuit stabilization. Leveraging recent theoretical approaches, we derive explicit expressions for the distribution of responses to partial perturbations, which reveal a novel, counterintuitive effect in the sign of response functions.
An inhibition-stabilized network (ISN) is a network of excitatory and inhibitory cells at a stable fixed point of firing rates for a given input, for which the excitatory subnetwork would be unstable if inhibitory rates were frozen at their fixed point values. It has been shown that in a low-dimensional model (one unit per neuronal subtype) of an ISN with a single excitatory and single inhibitory cell type, the inhibitory unit shows a "paradoxical" response, lowering (raising) its steady-state firing rate in response to addition to it of excitatory (inhibitory) input. This has been generalized to an ISN with multiple inhibitory cell types: if input is given only to inhibitory cells, the steady-state inhibition received by excitatory cells changes paradoxically, that is, it decreases (increases) if the steady-state excitatory firing rates decrease (increase). We generalize these analyses of paradoxical effects to low-dimensional networks with multiple cell types of both excitatory and inhibitory neurons. The analysis depends on the connectivity matrix of the network linearized about a given fixed point, and its eigenvectors or "modes". We show the following: (1) A given cell type shows a paradoxical change in steady-state rate in response to input it receives, if and only if the network with that cell type omitted has an odd number of unstable modes. Excitatory neurons can show paradoxical responses when there are two or more inhibitory subtypes. (2) More generally, if the cell types are divided into two nonoverlapping subsets A and B, then subset B has an odd (even) number of modes that show paradoxical response if and only if subset A has an odd (even) number of unstable modes. (3) The net steady-state inhibition received by any unstable mode of the excitatory subnetwork will change paradoxically, i.e. in the same direction as the change in amplitude of that mode. 
In particular, this means that a sufficient condition to determine that a network is an ISN is that, in response to an input given only to inhibitory cells, the firing rates of, and the inhibition received by, all excitatory cell types change in the same direction. This in turn will be true if all E cells and all inhibitory cell types that connect to E cells change their firing rates in the same direction.
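The paradoxical response described above can be reproduced in the simplest possible setting: a two-population threshold-linear rate model with the E-to-E weight greater than one (so the excitatory subnetwork alone is unstable) but a stable full network. The weights and inputs below are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal inhibition-stabilized network (ISN): W_EE > 1, so the E
# subnetwork is unstable on its own, yet the full E-I network is stable.
W = np.array([[2.0, -1.5],      # onto E: from E, from I
              [2.5, -1.0]])     # onto I: from E, from I
tau = np.array([0.020, 0.010])  # time constants (s)

def steady_state(h, dt=1e-4, steps=50_000):
    """Integrate tau * dr/dt = -r + [W r + h]_+ to its fixed point."""
    r = np.zeros(2)
    for _ in range(steps):
        r = r + dt / tau * (-r + np.clip(W @ r + h, 0.0, None))
    return r

r_base = steady_state(np.array([5.0, 2.0]))
r_stim = steady_state(np.array([5.0, 3.0]))  # extra excitatory drive to I
print(np.round(r_base, 2), np.round(r_stim, 2))
```

Despite the added excitatory input to the inhibitory unit, its steady-state rate drops (here from 6.0 to about 5.4, exactly as the linear fixed-point algebra r = (I - W)^-1 h predicts): the signature paradoxical response of an ISN.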
Abstract

A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon, whether behavioral or in terms of neural activity, and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choices of model parameters. Historically, the gold standard has been to analytically derive the relationship between model parameters and computational properties. However, this enterprise quickly becomes infeasible as biologically realistic constraints are included into the model, increasing its complexity and often resulting in ad hoc approaches to understanding the relationship between model and computation. We bring recent machine learning techniques, the use of deep generative models for probabilistic inference, to bear on this problem, learning distributions of parameters that produce the specified properties of computation. Importantly, the techniques we introduce offer a principled means to understand the implications of model parameter choices on computational properties of interest. We motivate this methodology with a worked example analyzing sensitivity in the stomatogastric ganglion. We then use it to generate insights into neuron-type input-responsivity in a model of primary visual cortex, a new understanding of rapid task switching in superior colliculus models, and attribution of error in recurrent neural networks solving a simple mathematical task. More generally, this work suggests a departure from realism-versus-tractability considerations, toward the use of modern machine learning for sophisticated interrogation of biologically relevant models.
Introduction

The fundamental practice of theoretical neuroscience is to use a mathematical model to understand neural computation, whether that computation enables perception, action, or some intermediate processing [1]. A neural computation is systematized with a set of equations, the model, and these equations are motivated by biophysics, neurophysiology, and other conceptual considerations. The function of this system is governed by the choice of model parameters, which, when configured in a particular way, give rise to a measurable signature of a computation. The work of analyzing a model then requires solving the inverse problem: given a computation of interest, how can we reason about these particular parameter configurations? The inverse problem is crucial for reasoning about likely parameter values, uniquenesses and degeneracies, attractor states and phase transitions, and predictions made by the model.

Consider the idealized practice: one carefully designs a model and analytically derives how model parameters govern the computation. Seminal examples of this gold standard (which often adopt approaches from statistical physics) include o...
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon -- whether behavioral or a pattern of neural activity -- and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
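As a conceptual toy, and emphatically not the EPI algorithm itself, the inverse problem above can be brute-forced by rejection sampling: draw parameters, simulate, and keep only those that exhibit the emergent property. The model below is hypothetical (a single linear rate unit with gain g and self-coupling w, with the "emergent property" being a steady-state rate near a target); EPI replaces this filter with a deep generative model, which is what lets the approach scale to high-dimensional circuit models.

```python
import numpy as np

# Toy illustration of parameter inference by emergent property:
# rejection sampling on a hypothetical one-unit rate model.
rng = np.random.default_rng(2)

def steady_rate(g, w, h=1.0):
    # fixed point of r = g * (w * r + h); requires g * w < 1 for stability
    return g * h / (1.0 - g * w)

target, tol = 2.0, 0.2
accepted = []
for _ in range(20_000):
    g = rng.uniform(0.1, 2.0)    # gain
    w = rng.uniform(-1.0, 0.9)   # self-coupling
    if g * w >= 0.95:            # discard unstable / near-unstable draws
        continue
    if abs(steady_rate(g, w) - target) < tol:
        accepted.append((g, w))

accepted = np.array(accepted)
print(accepted.shape[0])         # size of the inferred parameter set
```

The accepted set traces out a degenerate curve in (g, w) space, the kind of parametric structure whose distribution EPI learns directly instead of sampling it by brute force.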