Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher-order interactions among large groups of elements have an important role. Here we show, in the vertebrate retina, that weak correlations between pairs of neurons coexist with strongly collective behaviour in the responses of ten or more neurons. We find that this collective behaviour is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behaviour. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons.
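To make the pairwise maximum entropy construction concrete, here is a minimal sketch (ours, not the paper's code) that enumerates all activity patterns of a small binary population and evaluates the Ising-form distribution P(σ) ∝ exp(Σ_i h_i σ_i + Σ_{i<j} J_ij σ_i σ_j). The fields h and couplings J are illustrative placeholders rather than fitted values, and exhaustive enumeration is feasible only for small N.

```python
import itertools

import numpy as np

def ising_distribution(h, J):
    """Pairwise maximum entropy (Ising) distribution over all 2^N binary
    patterns of a small population. h: fields, shape (N,); J: couplings,
    shape (N, N), upper triangle used. Exhaustive, so small N only."""
    N = len(h)
    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    # log weight of each pattern: sum_i h_i s_i + sum_{i<j} J_ij s_i s_j
    pair_term = np.einsum('ki,ij,kj->k', patterns, np.triu(J, k=1), patterns)
    log_w = patterns @ h + pair_term
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return patterns, w / w.sum()

# Illustrative parameters for N = 5 neurons (placeholders, not fitted values)
rng = np.random.default_rng(0)
h = rng.normal(-1.0, 0.5, 5)      # biases toward silence
J = rng.normal(0.0, 0.3, (5, 5))  # weak pairwise couplings
patterns, p = ising_distribution(h, J)
print(patterns[p.argmax()], p.max())  # most probable network state
```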
Gene activity is described by the time series of discrete, stochastic mRNA production events. This transcriptional time series exhibits intermittent, bursty behavior. One consequence of this temporal intricacy is that gene expression can be tuned by varying different features of the time series. What schemes for varying the transcriptional time series are observed in the cell? Are the observed properties of these time series optimized for cellular function? To address these questions, we characterize mRNA copy-number statistics at single-molecule resolution from multiple Escherichia coli promoters. We find that the degree of burstiness depends only on the gene expression level and is independent of the details of gene regulation. The observed behavior is explained by the underlying variation in the duration of bursting events. Using information theory, we find that the properties of the transcriptional time series allow the cell to efficiently map the extracellular concentration of inducer molecules to intracellular levels of mRNA and proteins.
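As a rough illustration of the information-theoretic claim, the sketch below computes the mutual information I(c; m) between a discretized inducer concentration and the mRNA copy number. The negative-binomial (bursty) output model, the burst size, and the dose-response curve are assumptions chosen for illustration, not the paper's measured values.

```python
import numpy as np
from scipy.stats import nbinom

def mutual_information(p_c, p_m_given_c):
    """I(C; M) in bits for a discrete input distribution p_c (shape [Nc])
    and conditional output distribution p_m_given_c (shape [Nc, Nm])."""
    p_m = p_c @ p_m_given_c  # output marginal
    mi = 0.0
    for pc, pmc in zip(p_c, p_m_given_c):
        mask = pmc > 0
        mi += pc * np.sum(pmc[mask] * np.log2(pmc[mask] / p_m[mask]))
    return mi

# Assumed model: mean mRNA level rises linearly with inducer concentration,
# and copy numbers are negative-binomially distributed (a bursty limit).
concentrations = np.linspace(0.1, 10, 20)  # arbitrary units, uniform prior
p_c = np.full(len(concentrations), 1 / len(concentrations))
m = np.arange(200)
burst_size = 5.0  # assumed, not a fitted value
p_m_given_c = np.array([
    nbinom.pmf(m, n=mu / burst_size, p=1 / (1 + burst_size))
    for mu in 2.0 + 8.0 * concentrations
])
p_m_given_c /= p_m_given_c.sum(axis=1, keepdims=True)  # renormalize truncation
print(f"I(c; mRNA) = {mutual_information(p_c, p_m_given_c):.2f} bits")
```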
Information is carried in the brain by the joint activity patterns of large groups of neurons. Understanding the structure and function of population neural codes is challenging because of the exponential number of possible activity patterns and dependencies among neurons. We report here that for groups of ∼100 retinal neurons responding to natural stimuli, pairwise-based models, which were highly accurate for small networks, are no longer sufficient. We show that because of the sparse nature of the neural code, the higher-order interactions can be easily learned using a novel model and that a very sparse low-order interaction network underlies the code of large populations of neurons. Additionally, we show that the interaction network is organized in a hierarchical and modular manner, which hints at scalability. Our results suggest that learnability may be a key feature of the neural code.

Keywords: high-order correlations | maximum entropy | neural networks | sparseness

Sensory and motor information is carried in the brain by sequences of action potentials of large populations of neurons (1-3) and, often, by correlated patterns of activity (4-11). The detailed nature of the code of neural populations, namely the way information is represented by the specific patterns of spiking and silence over a group of neurons, is determined by the dependencies among cells. For small groups of neurons, we can directly sample the full distribution of activity patterns of the population; identify all the underlying interactions, or lack thereof; and understand the design of the code (12-15). However, this approach cannot work for large networks: the number of possible activity patterns of just 100 neurons, a population size that already has clear functional implications (16), exceeds 10^30. Thus, our understanding of the code of large neural populations depends on finding simple sets of dependencies among cells that would capture the network behavior (17-19).

The success of pairwise-based models in describing the strongly correlated activity of small groups of neurons (19-25) suggests one such simplifying principle of network organization and population neural codes, which also simplifies their analysis. Using only a quadratic number of interactions, out of the exponential number of potential ones, pairwise maximum entropy models reveal that the code relies on strongly correlated network states and exhibits distributed error-correcting structure (19, 21). It is unclear, however, if pairwise models are sufficient for large networks, particularly when presented with natural stimuli that contain high-order correlation structure. Here we show that in this case pairwise models capture much, but not all, of the network behavior. This implies a much more complicated structure of population codes (26, 27). Because learning even pairwise models is computationally hard (28-33), this may seem to suggest that population codes would be extremely hard to learn.

We show here that this is not the case for neural population codes. The sparseness of...
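As a schematic of what a sparse low-order interaction model looks like (not the authors' fitting procedure), the sketch below evaluates the unnormalized log-probability of a binary activity pattern under a handful of explicit pairwise and triplet couplings. Because the interaction set is stored sparsely, only the few couplings that actually exist contribute, no matter how large the population; all indices and values here are illustrative placeholders.

```python
import numpy as np

def log_prob_unnormalized(pattern, fields, interactions):
    """Log-probability (up to normalization) of a binary pattern under a
    sparse interaction model. interactions maps a tuple of neuron indices
    (pairs, triplets, ...) to a coupling; a term contributes only when all
    of its neurons fire, so sparse patterns activate few terms."""
    lp = float(np.dot(fields, pattern))
    for group, J in interactions.items():
        if all(pattern[i] for i in group):
            lp += J
    return lp

# Illustrative sparse network over 6 neurons (placeholder values)
fields = np.full(6, -2.0)  # bias toward silence, i.e., sparse firing
interactions = {
    (0, 1): 1.2,        # pairwise couplings
    (2, 4): 0.8,
    (0, 1, 3): 1.5,     # one triplet (third-order) coupling
}
pattern = np.array([1, 1, 0, 1, 0, 0])
print(log_prob_unnormalized(pattern, fields, interactions))
```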
To understand a neural circuit completely requires simultaneous recording from most of the neurons in that circuit. Here we report recording and spike sorting techniques that enable us to record from all or nearly all of the ganglion cells in a patch of the retina. With a dense multi-electrode array, each ganglion cell produces a unique pattern of activity on many electrodes when it fires an action potential. Signals from all of the electrodes are combined with an iterative spike sorting algorithm to resolve ambiguities arising from overlapping spike waveforms. We verify that we are recording from a large fraction of ganglion cells over the array by labeling the ganglion cells with a retrogradely transported dye and by comparing the number of labeled and recorded cells. Using these methods, we show that about 60 receptive fields of ganglion cells cover each point in visual space in the salamander, consistent with anatomical findings.
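One standard way to resolve overlapping waveforms, in the spirit of the iterative procedure described here but not necessarily the authors' exact algorithm, is greedy template matching with subtraction: repeatedly find the cell template that most reduces the residual error, assign a spike, and subtract it. The sketch below is a minimal illustration with toy templates; the stopping rule and parameters are assumptions.

```python
import numpy as np

def iterative_sort(snippet, templates, max_iters=10, min_gain=0.0):
    """Greedily explain a recorded snippet as a sum of cell templates.
    snippet and each templates[k] have shape (electrodes, samples); each
    cell's template is its spatiotemporal waveform across the array."""
    residual = snippet.astype(float).copy()
    assigned = []
    for _ in range(max_iters):
        # Decrease in squared error from subtracting each template once:
        # ||r||^2 - ||r - t||^2 = 2 r.t - ||t||^2, up to a factor of 2.
        gains = [np.sum(residual * t) - 0.5 * np.sum(t * t) for t in templates]
        best = int(np.argmax(gains))
        if gains[best] <= min_gain:
            break  # nothing left that a template explains
        assigned.append(best)
        residual -= templates[best]
    return assigned, residual

# Toy overlap of two cells' waveforms on an 8-electrode array (illustrative)
rng = np.random.default_rng(1)
t1, t2 = rng.normal(size=(2, 8, 30))
mix = t1 + t2 + 0.05 * rng.normal(size=(8, 30))
spikes, _ = iterative_sort(mix, [t1, t2])
print(spikes)  # both cells recovered, e.g. [0, 1]
```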
Evidence from a variety of recording methods suggests that many areas of the brain are far more sparsely active than commonly thought. Here, we review experimental findings pointing to the existence of neurons which fire action potentials rarely or only to very specific stimuli. Because such neurons would be difficult to detect with the most common method of monitoring neural activity in vivo (extracellular electrode recording), they could be referred to as "dark neurons," in analogy to the astrophysical observation that much of the matter in the universe is undetectable, or dark. In addition to discussing the evidence for largely silent neurons, we review technical advances that will ultimately answer the question: how silent is the brain?