Populations of neurons in the retina, olfactory system, visual and somatosensory thalamus, and several cortical regions show temporal correlation between the discharge times of their action potentials (spike trains). Correlated firing has been linked to stimulus encoding, attention, stimulus discrimination, and motor behaviour. Nevertheless, the mechanisms underlying correlated spiking are poorly understood, and its coding implications are still debated. It is not clear, for instance, whether correlations between the discharges of two neurons are determined solely by the correlation between their afferent currents, or whether they also depend on the mean and variance of the input. We addressed this question by computing the spike train correlation coefficient of unconnected pairs of in vitro cortical neurons receiving correlated inputs. Notably, even when the input correlation remained fixed, the spike train output correlation increased with the firing rate, but was largely independent of spike train variability. With a combination of analytical techniques and numerical simulations using 'integrate-and-fire' neuron models, we show that this relationship between output correlation and firing rate is robust to input heterogeneities. Finally, this overlooked relationship is replicated by a standard threshold-linear model, demonstrating the universality of the result. This connection between the rate and correlation of spiking activity links two fundamental features of the neural code.
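The threshold-linear intuition can be sketched in a few lines. This is a toy illustration with invented parameters, not the paper's fitted model: two rectified units share a correlated Gaussian input, and their output correlation rises with the mean drive even though the input correlation is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in = 0.3       # fixed input correlation (toy value)
n = 200_000      # number of samples

def output_corr(mu):
    # Shared + private Gaussian inputs give pairwise input correlation c_in.
    common = rng.standard_normal(n)
    x1 = mu + np.sqrt(c_in) * common + np.sqrt(1 - c_in) * rng.standard_normal(n)
    x2 = mu + np.sqrt(c_in) * common + np.sqrt(1 - c_in) * rng.standard_normal(n)
    # Threshold-linear transfer: the output is the rectified input.
    return np.corrcoef(np.maximum(x1, 0), np.maximum(x2, 0))[0, 1]

low, high = output_corr(-1.0), output_corr(1.0)
print(low, high)  # output correlation rises with mean drive at fixed input correlation
```

Raising `mu` plays the role of raising the firing rate; the output correlation climbs toward (but never exceeds) the input correlation `c_in`.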
To understand how the brain processes sensory information to guide behavior, we must know how stimulus representations are transformed throughout the visual cortex. Here we report an open, large-scale physiological survey of activity in the awake mouse visual cortex: the Allen Brain Observatory Visual Coding dataset. This publicly available dataset includes cortical activity from nearly 60,000 neurons from 6 visual areas, 4 layers, and 12 transgenic mouse lines from 243 adult mice, in response to a systematic set of visual stimuli. We classify neurons based on joint reliabilities to multiple stimuli and validate this functional classification with models of visual responses. While most classes are characterized by responses to specific subsets of the stimuli, the largest class is not reliably responsive to any of the stimuli and becomes progressively larger in higher visual areas. These classes reveal a functional organization wherein putative dorsal areas show specialization for visual motion signals.
Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result there is increasing interest in the structure of cooperative – or correlated – activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity along with the dynamical properties of single cells shape the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture, and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance – or lack thereof – between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks.
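The path expansion described above can be illustrated with a toy interaction matrix; the names `K`, `C0`, and the 5-cell network below are hypothetical stand-ins, not the paper's formalism. The linear-response propagator (I - K)^{-1} is expanded as I + K + K^2 + ..., where the k-th power collects contributions from paths of length k between cells.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
K = 0.2 * rng.standard_normal((N, N))            # toy effective-interaction matrix
assert np.max(np.abs(np.linalg.eigvals(K))) < 1  # spectral radius < 1: series converges

C0 = np.eye(N)                        # baseline (uncoupled) covariance
Delta = np.linalg.inv(np.eye(N) - K)  # full linear-response propagator
C_full = Delta @ C0 @ Delta.T

def truncated(m):
    # Keep only paths of length <= m: Delta ~ I + K + K^2 + ... + K^m
    D = sum(np.linalg.matrix_power(K, k) for k in range(m + 1))
    return D @ C0 @ D.T

err = [np.linalg.norm(C_full - truncated(m)) for m in range(6)]
print(err)  # approximation error shrinks as longer paths are included
```

Because the spectral radius of `K` is below one, each extra term shrinks the error geometrically, which is what makes the path-by-path interpretation quantitative.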
We study how pairs of neurons transfer correlated input currents into correlated spikes. Over rapid time scales, correlation transfer increases with both spike time variability and rate; the dependence on variability disappears at large time scales. This dependence persists for a nonlinear membrane model and for heterogeneous cell pairs, but strong nonmonotonicities follow from refractory effects. We present consequences for population coding and for the encoding of time-varying stimuli.
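A crude Euler-Maruyama sketch of this setup, with all parameters invented: a pair of leaky integrate-and-fire neurons receives partially shared white noise, and the spike-count correlation across trials grows as the counting window lengthens.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, T = 0.5, 20.0, 400.0     # ms; toy parameters
steps, n_trials = int(T / dt), 2000
mu, sigma, c = 1.2, 0.6, 0.5      # suprathreshold drive, noise strength, input correlation
v1, v2 = np.zeros(n_trials), np.zeros(n_trials)
counts = np.zeros((2, n_trials, steps), dtype=int)

for t in range(steps):
    common = rng.standard_normal(n_trials)         # input shared by the pair
    for i, v in enumerate((v1, v2)):
        xi = np.sqrt(c) * common + np.sqrt(1 - c) * rng.standard_normal(n_trials)
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * xi
        spiked = v >= 1.0                          # threshold 1, reset 0
        counts[i, :, t] = spiked
        v[spiked] = 0.0

def count_corr(window_ms):
    w = int(window_ms / dt)
    return np.corrcoef(counts[0, :, :w].sum(1), counts[1, :, :w].sum(1))[0, 1]

rho_short, rho_long = count_corr(20.0), count_corr(400.0)
print(rho_short, rho_long)  # count correlation grows with the counting window
```

This only reproduces the qualitative window-size effect; the paper's analytical treatment and the refractory nonmonotonicities are not captured here.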
Summary Neural responses are noisy, and circuit structure can correlate this noise across neurons. Theoretical studies show that noise correlations can have diverse effects on population coding, but these studies rarely explore stimulus dependence of noise correlations. Here, we show that noise correlations in responses of ON-OFF direction-selective retinal ganglion cells are strongly stimulus dependent and we uncover the circuit mechanisms producing this stimulus dependence. A population model based on these mechanistic studies shows that stimulus-dependent noise correlations improve the encoding of motion direction two-fold compared to independent noise. This work demonstrates a mechanism by which a neural circuit effectively shapes its signal and noise in concert, minimizing corruption of signal by noise. Finally, we generalize our findings beyond direction coding in the retina and show that stimulus-dependent correlations will generally enhance information coding in populations of diversely tuned neurons.
Conductance-based equations for electrically active cells form one of the most widely studied mathematical frameworks in computational biology. This framework, as expressed through a set of differential equations by Hodgkin and Huxley, synthesizes the impact of ionic currents on a cell's voltage—and the highly nonlinear impact of that voltage back on the currents themselves—into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations or their counterparts. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the equations of Hodgkin-Huxley type. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic equations of Hodgkin-Huxley type as well as to more modern models of ion channel dynamics. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly MATLAB simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
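As a minimal illustration of such kinetic fluctuations, consider a toy two-state channel (invented rates, not the full Hodgkin-Huxley gating scheme): simulating N independent channels as a discrete-time Markov chain shows the open-fraction fluctuations shrinking roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, dt = 0.1, 0.2, 0.05   # opening/closing rates (1/ms), step (ms); toy values
p_open = alpha / (alpha + beta)    # stationary open probability

def open_fraction_sd(n_channels, steps=20_000):
    open_state = rng.random(n_channels) < p_open   # start at stationarity
    fractions = np.empty(steps)
    for t in range(steps):
        u = rng.random(n_channels)
        # closed -> open with prob alpha*dt; open -> closed with prob beta*dt
        open_state = np.where(open_state, u >= beta * dt, u < alpha * dt)
        fractions[t] = open_state.mean()
    return fractions.std()

sd_small, sd_large = open_fraction_sd(100), open_fraction_sd(10_000)
print(sd_small, sd_large)  # fluctuations shrink roughly as 1/sqrt(N)
```

For the stationary binomial statistics here the standard deviation of the open fraction is sqrt(p(1-p)/N), so a 100-fold increase in channel count cuts the noise by roughly a factor of 10.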
The random transitions of ion channels between conducting and nonconducting states generate a source of internal fluctuations in a neuron, known as channel noise. The standard method for modeling the states of ion channels nonlinearly couples continuous-time Markov chains to a differential equation for voltage. Beginning with the work of R. F. Fox and Y.-N. Lu [Phys. Rev. E 49, 3421 (1994)], there have been attempts to generate simpler models that use stochastic differential equations (SDEs) to approximate the stochastic spiking activity produced by Markov chain models. Recent numerical investigations, however, have raised doubts that SDE models can capture the stochastic dynamics of Markov chain models. We analyze three SDE models that have been proposed as approximations to the Markov chain model: one that describes the states of the ion channels and two that describe the states of the ion channel subunits. We show that the former channel-based approach can capture the distribution of channel noise and its effects on spiking in a Hodgkin-Huxley neuron model to a degree not previously demonstrated, but the latter two subunit-based approaches cannot. Our analysis provides intuitive and mathematical explanations for why this is the case. The temporal correlation in the channel noise is determined by the combinatorics of bundling subunits into channels, but the subunit-based approaches do not correctly account for this structure. Our study confirms and elucidates the findings of previous numerical investigations of subunit-based SDE models. Moreover, it presents evidence that Markov chain models of the nonlinear, stochastic dynamics of neural membranes can be accurately approximated by SDEs. This finding opens a door to future modeling work using SDE techniques to further illuminate the effects of ion channel fluctuations on electrically active cells.
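The combinatorial origin of the temporal correlation can be checked in a toy setting (a channel that opens only when all four of its independent two-state subunits are open; all rates invented). For such a channel the stationary autocovariance is [p(p + (1-p)e^{-t/tau})]^n - p^{2n}, a sum of n distinct exponentials rather than the single exponential of one subunit, and a Monte Carlo estimate matches this form:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, dt = 0.2, 0.3, 0.02   # subunit opening/closing rates (1/ms), step (ms); toy
n_sub = 4                          # subunits per channel
p = alpha / (alpha + beta)         # stationary open probability of one subunit
tau = 1.0 / (alpha + beta)         # subunit correlation time (ms)

steps, n_ch = 100_000, 400
sub_open = rng.random((n_ch, n_sub)) < p       # start at stationarity
channel = np.empty((steps, n_ch), dtype=bool)
for t in range(steps):
    u = rng.random((n_ch, n_sub))
    # closed -> open with prob alpha*dt; open -> closed with prob beta*dt
    sub_open = np.where(sub_open, u >= beta * dt, u < alpha * dt)
    channel[t] = sub_open.all(axis=1)          # channel open iff all subunits open

def autocov(lag_ms):
    lag = int(round(lag_ms / dt))
    return (channel[:-lag] & channel[lag:]).mean() - channel.mean() ** 2

def predicted(lag_ms):
    # Multi-exponential form from bundling n_sub subunits into one channel.
    return (p * (p + (1 - p) * np.exp(-lag_ms / tau))) ** n_sub - p ** (2 * n_sub)

mc, th = autocov(1.0), predicted(1.0)
print(mc, th)  # Monte Carlo estimate matches the combinatorial prediction
```

A subunit-based SDE that tracks only one effective subunit would produce a single-exponential autocovariance, which is the structural mismatch the abstract describes.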
Knowledge of mesoscopic brain connectivity is important for understanding inter- and intraregion information processing. Models of structural connectivity are typically constructed and analyzed with the assumption that regions are homogeneous. We instead use the Allen Mouse Brain Connectivity Atlas to construct a model of whole-brain connectivity at the scale of 100 μm voxels. The data consist of 428 anterograde tracing experiments in wild type C57BL/6J mice, mapping fluorescently labeled neuronal projections brain-wide. Inferring spatial connectivity with this dataset is underdetermined, since the approximately 2 × 10⁵ source voxels outnumber the number of experiments. To address this issue, we assume that connection patterns and strengths vary smoothly across major brain divisions. We model the connectivity at each voxel as a radial basis kernel-weighted average of the projection patterns of nearby injections. The voxel model outperforms a previous regional model in predicting held-out experiments and when compared with a human-curated dataset. This voxel-scale model of the mouse connectome permits researchers to extend their previous analyses of structural connectivity to much higher levels of resolution, and it allows for comparison with functional imaging and other datasets.
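The core regression idea, a Nadaraya-Watson style kernel-weighted average, can be sketched on invented one-dimensional data; the positions, projection patterns, and bandwidth below are all hypothetical stand-ins, not the atlas data or the paper's fitted kernel.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented stand-in data: 80 "injections" at 1-D source positions, each with
# a 10-dimensional projection pattern that varies smoothly with position.
inj_pos = rng.uniform(0, 1, 80)
targets = np.linspace(0, 1, 10)
inj_proj = np.exp(-(inj_pos[:, None] - targets[None, :]) ** 2 / 0.02)

def predict(voxel_pos, bandwidth=0.05):
    # Gaussian radial basis kernel-weighted average of nearby injections.
    w = np.exp(-((inj_pos - voxel_pos) ** 2) / (2 * bandwidth ** 2))
    return w @ inj_proj / w.sum()

pred = predict(0.5)
true = np.exp(-(0.5 - targets) ** 2 / 0.02)
print(np.abs(pred - true).max())  # the smoothness assumption recovers the local pattern
```

The smoothness assumption is what resolves the underdetermination: a voxel with no direct injection borrows the projection patterns of its neighbors, weighted by distance.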