In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose patch recordings in rodents to obtain ‘ground truth’ data, where the solution to this sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to sort spikes accurately for up to thousands of electrodes.
Recent experimental results based on multi-electrode and imaging techniques have reinvigorated the idea that large neural networks operate near a critical point, between order and disorder [1, 2]. However, evidence for criticality has relied on the definition of arbitrary order parameters, or on models that do not address the dynamical nature of network activity. Here we introduce a novel approach to assess criticality that overcomes these limitations, while encompassing and generalizing previous criteria. We find a simple model to describe the global activity of large populations of ganglion cells in the rat retina, and show that their statistics are poised near a critical point. Taking into account the temporal dynamics of the activity greatly enhances the evidence for criticality, revealing it where previous methods would not. The approach is general and could be used in other biological networks.

Complex brain functions usually involve large numbers of neurons interacting in diverse ways and spanning a wide range of time and length scales. At first sight, systems of inanimate matter seem to enjoy more regular properties, but they may also display complex and heterogeneous behaviors when in a critical state, which corresponds to special points of the parameter space. Thinking about the brain as a system near a critical point has been an attractive idea, which gained attention following the suggestion that such critical states could be achieved in a self-organized manner, without fine-tuning [3], and the proposal that operating near a critical point could be beneficial for computation [4].

Despite considerable work on the foundations of a theory of critical neural networks (see [5, 6] for recent examples), the validation of these ideas with experimental data has proven difficult, largely because it requires measuring the detailed activity of large populations of neurons. Recent progress has been made possible by advances in multi-electrode and imaging techniques, which have helped detect signatures of criticality in a variety of neural contexts. Two lines of empirical evidence, rooted in different approaches to critical systems, have been followed, albeit with little intersection. In line with the original ideas of self-organized criticality and branching processes, neural avalanches in cortical layers have been shown to display power-law statistics [7][8][9][10]. This observation is indicative of the critical nature of the system's dynamics, but it relies on arbitrary choices, such as the number of units considered, the minimal silence time used to call the end of an avalanche, or the definition of a neural event itself. The stability exponents of the neural dynamics, which become positive at the transition to chaos, have also been used as signatures of criticality [11]. This criterion relies on a continuous description of neural activity, which is inappropriate for codes relying on combinations of spikes and silences. Both these approaches address the dynamical aspect of criticality. They re...
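The arbitrary choices entering avalanche analysis can be made concrete with a minimal sketch (illustrative only, not the approach introduced above): avalanche sizes are read off a binned population raster, and both the bin size and the silence criterion that ends an avalanche are exactly the free parameters criticized in the text. The raster shape, rates, and function names below are assumptions made for the example.

```python
import numpy as np

def avalanche_sizes(raster, bin_size=10, silence_bins=1):
    """Extract avalanche sizes from a binary spike raster.

    raster       : (n_neurons, n_timesteps) array of 0/1 spike indicators
    bin_size     : raw time steps per bin (an arbitrary choice)
    silence_bins : empty bins needed to declare the end of an avalanche
                   (another arbitrary choice)
    """
    n_neurons, n_t = raster.shape
    n_bins = n_t // bin_size
    # Total population activity in each time bin.
    pop = raster[:, :n_bins * bin_size].reshape(n_neurons, n_bins, bin_size).sum(axis=(0, 2))

    sizes, current, silent = [], 0, 0
    for count in pop:
        if count > 0:
            current += count
            silent = 0
        else:
            silent += 1
            if current > 0 and silent >= silence_bins:
                sizes.append(current)
                current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

# Toy usage: an independent, random raster will not show a power law; real data might.
rng = np.random.default_rng(0)
raster = (rng.random((50, 100_000)) < 0.01).astype(int)
print("avalanches found:", len(avalanche_sizes(raster, bin_size=5, silence_bins=1)))
```

Changing `bin_size` or `silence_bins` changes the extracted size distribution, which is the fragility the text points to.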
In the early visual system, cells of the same type perform the same computation in different places of the visual field. How these cells code together a complex visual scene is unclear. A common assumption is that cells of a single type extract a single stimulus feature to form a feature map, but this has rarely been observed directly. Using large-scale recordings in the rat retina, we show that a homogeneous population of fast OFF ganglion cells simultaneously encodes two radically different features of a visual scene. Cells close to a moving object code quasilinearly for its position, while distant cells remain largely invariant to the object’s position and, instead, respond nonlinearly to changes in the object’s speed. We develop a quantitative model that accounts for this effect and identify a disinhibitory circuit that mediates it. Ganglion cells of a single type thus do not code for one, but two features simultaneously. This richer, flexible neural map might also be present in other sensory systems.
The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and, for the first time, we find a single model from which both RF geometries spontaneously emerge at the appropriate stages of visual processing. First, the key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.
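As a rough illustration of the bottleneck idea (a sketch under assumed layer sizes, not the architecture used in the study), the "retinal" stage of a convolutional network can be constrained to a very small number of output channels before a deeper "cortical" stage reads it out; in PyTorch this might look like:

```python
import torch
import torch.nn as nn

class RetinaCortexNet(nn.Module):
    """Illustrative retina -> bottleneck -> cortex CNN (not the published model)."""
    def __init__(self, bottleneck_channels=2, n_classes=10):
        super().__init__()
        # "Retinal" stage: a few conv layers ending in a very narrow output,
        # standing in for the optic-nerve bottleneck.
        self.retina = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, kernel_size=9, padding=4), nn.ReLU(),
        )
        # "Cortical" stage: a deeper network reading out from the bottleneck.
        self.cortex = nn.Sequential(
            nn.Conv2d(bottleneck_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.cortex(self.retina(x))

model = RetinaCortexNet(bottleneck_channels=2)
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```

With a wide bottleneck the first stage can pass the image through almost unchanged, whereas a narrow bottleneck forces it to compress; this channel constraint is the kind of resource limitation to which the result above attributes the center-surround geometry at the retinal output.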
One of the most striking aspects of early visual processing in the retina is the immediate parcellation of visual information into multiple parallel pathways, formed by different retinal ganglion cell types each tiling the entire visual field. Existing theories of efficient coding have been unable to account for the functional advantages of such cell-type diversity in encoding natural scenes. Here we go beyond previous theories to analyze how a simple linear retinal encoding model with different convolutional cell types efficiently encodes naturalistic spatiotemporal movies given a fixed firing rate budget. We find that optimizing the receptive fields and cell densities of two cell types makes them match the properties of the two main cell types in the primate retina, midget and parasol cells, in terms of spatial and temporal sensitivity, cell spacing, and their relative ratio. Moreover, our theory gives a precise account of how the ratio of midget to parasol cells decreases with retinal eccentricity. We also train a nonlinear encoding model with a rectifying nonlinearity to efficiently encode naturalistic movies, and again find emergent receptive fields resembling those of midget and parasol cells, now further subdivided into ON and OFF types. Thus our work provides a theoretical justification, based on the efficient coding of natural movies, for the existence of the four most dominant cell types in the primate retina, which together comprise 70% of all ganglion cells. All code is available at https://github.com/ganguli-lab/RetinalCellTypes.
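A minimal sketch of the kind of objective involved (illustrative, not the paper's model or the linked code): two learnable convolutional cell types, each tiled at its own density via the convolution stride, are scored by how well their noisy responses support linear reconstruction of the input, plus a penalty standing in for the fixed firing-rate budget. The layer sizes, strides, noise level, and rate proxy below are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTypeLinearRetina(nn.Module):
    """Illustrative two-cell-type linear encoder with a firing-rate penalty."""
    def __init__(self, kernel_size=15, stride_type1=2, stride_type2=4):
        super().__init__()
        # Each "cell type" is one learnable spatial filter, tiled at its own
        # density (density is stood in for by the convolution stride).
        self.type1 = nn.Conv2d(1, 1, kernel_size, stride=stride_type1, padding=kernel_size // 2)
        self.type2 = nn.Conv2d(1, 1, kernel_size, stride=stride_type2, padding=kernel_size // 2)
        # Simple linear decoders mapping responses back to the image.
        self.dec1 = nn.ConvTranspose2d(1, 1, kernel_size, stride=stride_type1, padding=kernel_size // 2)
        self.dec2 = nn.ConvTranspose2d(1, 1, kernel_size, stride=stride_type2, padding=kernel_size // 2)

    def loss(self, images, noise_std=0.1, rate_weight=1e-2):
        # Noisy responses of each cell type (noise stands in for spiking variability).
        r1 = self.type1(images)
        r2 = self.type2(images)
        r1 = r1 + noise_std * torch.randn_like(r1)
        r2 = r2 + noise_std * torch.randn_like(r2)
        # Linear reconstruction of the stimulus from both mosaics.
        recon = self.dec1(r1, output_size=images.shape) + self.dec2(r2, output_size=images.shape)
        mse = F.mse_loss(recon, images)
        # Crude proxy for the firing-rate budget of the two populations.
        rate = r1.abs().mean() + r2.abs().mean()
        return mse + rate_weight * rate

model = TwoTypeLinearRetina()
frames = torch.randn(8, 1, 32, 32)    # stand-in for natural movie frames
print(model.loss(frames).item())      # the quantity one would minimize over filters and densities
```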
Understanding how assemblies of neurons encode information requires recording large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from hundreds or thousands of densely packed electrodes. However, these new devices challenge the classical approach to spike sorting. Here we developed a new method to solve these issues, based on a highly automated algorithm to extract spikes from extracellular data, and show that this algorithm reached near-optimal performance both in vitro and in vivo. The algorithm is composed of two main steps: 1) a "template-finding" phase to extract the cell templates, i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential; 2) a "template-matching" phase where these templates are matched to the raw data to find the location of the spikes. Manual intervention by the user is reduced to a minimum, and the time spent on manual curation does not scale with the number of electrodes. We tested our algorithm with large-scale data from in vitro and in vivo recordings, from 32 to 4225 electrodes. We performed simultaneous extracellular and patch recordings to obtain "ground truth" data, i.e. cases where the solution to the sorting problem is at least partially known. The performance of our algorithm was always close to the best expected performance. We thus provide a general solution to sort spikes from large-scale extracellular recordings.
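A toy single-channel sketch of the two phases (simplified for illustration, not the toolbox's implementation): detected waveforms are clustered into average templates, and the templates are then greedily fitted and subtracted from the raw trace. The thresholds, window length, and the crude k-means step are assumptions made for the example.

```python
import numpy as np

def detect_peaks(trace, threshold, window=20):
    """Find putative spike times as local minima crossing -threshold."""
    candidates = np.where(trace < -threshold)[0]
    return np.array([t for t in candidates
                     if window <= t < len(trace) - window
                     and trace[t] == trace[t - 5:t + 5].min()], dtype=int)

def find_templates(trace, peaks, window=20, n_templates=2, n_iter=10):
    """Phase 1 ("template finding"): cluster detected waveforms with a crude
    k-means (a stand-in for the real clustering) and average each cluster."""
    snippets = np.stack([trace[t - window:t + window] for t in peaks])
    rng = np.random.default_rng(0)
    centers = snippets[rng.choice(len(snippets), n_templates, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((snippets[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([snippets[labels == k].mean(0) if np.any(labels == k) else centers[k]
                            for k in range(n_templates)])
    return centers

def match_templates(trace, templates, window=20, detect_thresh=1.0, amp_thresh=0.5):
    """Phase 2 ("template matching"): at each detected peak, fit the best template
    amplitude by least squares and subtract it from the residual signal."""
    residual = trace.astype(float)
    spikes = []
    for t in detect_peaks(residual, detect_thresh, window):
        snippet = residual[t - window:t + window]
        amps = templates @ snippet / (templates * templates).sum(axis=1)
        k = int(np.argmax(amps))
        if amps[k] > amp_thresh:
            residual[t - window:t + window] -= amps[k] * templates[k]
            spikes.append((t, k))          # spike time and putative cell identity
    return spikes, residual

# Toy usage on a synthetic trace containing one repeated spike shape plus noise.
rng = np.random.default_rng(1)
trace = 0.05 * rng.standard_normal(5000)
shape = -np.exp(-0.5 * ((np.arange(40) - 20) / 3.0) ** 2)   # negative spike waveform
for t in rng.choice(np.arange(100, 4900), 30, replace=False):
    trace[t - 20:t + 20] += shape
peaks = detect_peaks(trace, threshold=0.5)
templates = find_templates(trace, peaks, n_templates=1)
spikes, residual = match_templates(trace, templates, detect_thresh=0.5)
print("matched spikes:", len(spikes))
```

Because matching fits an amplitude and subtracts the template, overlapping spikes can in principle be resolved on the residual, which is the main advantage of this phase over simple thresholding.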
A major challenge in sensory neuroscience is to understand how complex stimuli are collectively encoded by neural circuits. In particular, the role of correlations between output neurons is still unclear. Here we introduce a general strategy to equip an arbitrary model of stimulus encoding by single neurons with a network of couplings between output neurons. We develop a method for inferring both the parameters of the encoding model and the couplings between neurons from simultaneously recorded retinal ganglion cells. The inference method fits the couplings to accurately account for noise correlations, without affecting the performance in predicting the mean response. We demonstrate that the inferred couplings are independent of the stimulus used for learning, and can be used to predict the correlated activity in response to more complex stimuli. The model offers a powerful and precise tool for assessing the impact of noise correlations on the encoding of sensory information.
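As a sketch of the generative structure such couplings add (illustrative only; the paper's contribution is inferring these parameters from recorded responses, which is not shown here), each cell's expected spike count in a time bin combines its own stimulus drive with coupling terms from the other cells' spikes in the previous bin:

```python
import numpy as np

def coupled_rates(stim_drive, spikes_prev, couplings, baseline):
    """One time step of an illustrative coupled encoding model (GLM-like sketch).

    stim_drive  : (n_cells,) output of each cell's single-neuron encoding model
    spikes_prev : (n_cells,) spike counts of all cells in the previous bin
    couplings   : (n_cells, n_cells) matrix J, J[i, j] = influence of cell j on cell i
    baseline    : (n_cells,) per-cell offsets
    Returns expected spike counts under an exponential nonlinearity.
    """
    drive = stim_drive + couplings @ spikes_prev + baseline
    return np.exp(drive)

# Toy simulation with assumed sizes and parameter scales.
rng = np.random.default_rng(1)
n_cells, n_bins = 5, 200
J = 0.05 * rng.standard_normal((n_cells, n_cells))
np.fill_diagonal(J, 0.0)                       # couplings only between distinct cells
stim_drive = 0.2 * rng.standard_normal((n_bins, n_cells))
baseline = -1.0 * np.ones(n_cells)

spikes = np.zeros((n_bins, n_cells), dtype=int)
for t in range(1, n_bins):
    lam = coupled_rates(stim_drive[t], spikes[t - 1], J, baseline)
    spikes[t] = rng.poisson(lam)

print("mean spike count per bin:", spikes.mean(axis=0))
```

Setting J to zero recovers the independent single-neuron encoding model; fitting J is what lets the model capture noise correlations without changing the predicted mean responses.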