Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task: extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from the point of view of the organism, culminating in algorithms for real-time stimulus estimation based on a single example of the spike train. These methods were applied to an identified movement-sensitive neuron in the fly visual system. Such decoding experiments determined the effective noise level and fault tolerance of neural computation, and the structure of the decoding algorithms suggested a simple model for real-time analog signal processing with spiking neurons.
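The real-time estimation described above can be illustrated by first-order linear decoding: place one copy of a fixed decoding kernel at every spike time and sum. This is a minimal sketch with an assumed exponential kernel and toy numbers, not the kernel measured in the experiments.

```python
import numpy as np

def linear_decode(spike_times, kernel, t_grid, dt):
    """Estimate a stimulus by superposing one copy of a decoding
    kernel at each spike time (a first-order reconstruction).
    The kernel is centered on the spike, so the estimate may use
    stimulus structure both before and after each spike."""
    est = np.zeros_like(t_grid)
    half = len(kernel) // 2
    for t_spk in spike_times:
        i = int(round(t_spk / dt))
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(est):
                est[idx] += k
    return est

# Toy example with hypothetical numbers: one spike at t = 0.5 s,
# an assumed 20 ms exponential decoding kernel.
dt = 0.001                                      # 1 ms time step
t = np.arange(0.0, 1.0, dt)
kernel = np.exp(-np.arange(100) * dt / 0.02)    # assumed kernel shape
est = linear_decode([0.5], kernel, t, dt)
```

In practice the kernel itself would be fit (for example by least squares) so that the summed waveform best matches the known stimulus on training data.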
Adaptation is a widespread phenomenon in nervous systems, providing flexibility to function under varying external conditions. Here, we relate an adaptive property of a sensory system directly to its function as a carrier of information about input signals. We show that the input/output relation of a sensory system in a dynamic environment changes with the statistical properties of the environment. Specifically, when the dynamic range of inputs changes, the input/output relation rescales so as to match the dynamic range of responses to that of the inputs. We give direct evidence that the scaling of the input/output relation is set to maximize information transmission for each distribution of signals. This adaptive behavior should be particularly useful in dealing with the intermittent statistics of natural signals.
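The matching of the input/output relation to the input distribution has a classic noiseless-limit form: information transmission is maximized when the input/output curve equals the cumulative distribution of the inputs, so every response level is used equally often. The sketch below illustrates that principle on synthetic data; the function name and parameters are illustrative, not the paper's procedure.

```python
import numpy as np

def optimal_io_curve(stimuli, n_points=101):
    """For a noiseless channel with a bounded response, the
    information-maximizing input/output relation is the empirical
    cumulative distribution of the inputs (histogram equalization):
    all output levels are then occupied with equal probability."""
    s = np.sort(np.asarray(stimuli, dtype=float))
    x = np.linspace(s[0], s[-1], n_points)
    y = np.searchsorted(s, x, side="right") / len(s)  # empirical CDF
    return x, y

rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 1.0, 10_000)   # low-variance environment
wide = rng.normal(0.0, 4.0, 10_000)     # high-variance environment
x1, y1 = optimal_io_curve(narrow)
x2, y2 = optimal_io_curve(wide)
# Rescaling the stimulus axis of each curve by its own standard
# deviation collapses the two onto a single function, the signature
# of the adaptive rescaling described above.
```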
The major problem in information theoretic analysis of neural responses and other biological data is the reliable estimation of entropy-like quantities from small samples. We apply a recently introduced Bayesian entropy estimator to synthetic data inspired by experiments, and to real experimental spike trains. The estimator performs admirably even very deep in the undersampled regime, where other techniques fail. This opens new possibilities for the information theoretic analysis of experiments, and may be of general interest as an example of learning from limited data.
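The undersampling problem can be seen with the naive "plug-in" estimator, which is systematically biased downward when the number of samples is small relative to the number of possible outcomes. The sketch below shows that bias and a classical first-order correction (Miller–Madow); it does not reproduce the Bayesian (NSB-style) estimator used in the paper.

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Naive maximum-likelihood entropy estimate in bits.
    Biased low when samples are scarce: it can never exceed
    log2(number of samples), regardless of the true entropy."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def miller_madow(samples, alphabet_size):
    """Plug-in estimate plus the first-order bias correction
    (K - 1) / (2 N ln 2).  A partial fix only; Bayesian
    estimators go further, but are not reproduced here."""
    n = len(samples)
    return plugin_entropy(samples) + (alphabet_size - 1) / (2 * n * math.log(2))

random.seed(1)
K = 256   # uniform distribution over 256 outcomes: true entropy = 8 bits
small = [random.randrange(K) for _ in range(100)]   # badly undersampled
naive = plugin_entropy(small)        # necessarily below log2(100) ≈ 6.6 bits
corrected = miller_madow(small, K)
```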
We assessed the performance of a synapse that transmits small, sustained, graded potentials between two classes of second-order ocellar "L-neurons" of the locust. We characterized the transmission of both fixed levels of membrane potential and fluctuating signals by recording postsynaptic responses to changes in presynaptic potential. To ensure repeatability between stimuli, we controlled presynaptic signals with a voltage clamp. We found that the synapse introduces noise above the level of background activity in the postsynaptic neuron. By driving the presynaptic neuron with slow-ramp changes in potential, we found that the number of discrete signal levels the synapse transmits is ≈20. It can also transmit ≈20 discrete levels when the presynaptic signal is a graded rebound spike. Synaptic noise level is constant over the operating range of the synapse, which would not be expected if presynaptic potential set the probability for the release of individual quanta of neurotransmitter according to Poisson statistics. Responses to individual quanta of neurotransmission could not be resolved, which is consistent with a synapse that operates with large numbers of vesicles evoking small responses. When challenged with white noise stimuli, the synapse can transmit information at rates up to 450 bits/s, a performance that is sufficient to transmit natural signals about changes in illumination.
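The two headline numbers above are related by standard information-theoretic accounting: ≈20 distinguishable levels corresponds to log2(20) ≈ 4.3 bits per independent sample, and a Shannon-capacity estimate for a band-limited Gaussian channel gives rates of the same order as 450 bits/s. The sketch below uses hypothetical signal-range, noise, bandwidth, and SNR values chosen only to match these orders of magnitude; they are not the measured locust parameters.

```python
import math

def discriminable_levels(signal_range_mv, noise_sd_mv, criterion=2.0):
    """Rough count of distinguishable output levels: divide the
    operating range into bins a few noise standard deviations wide.
    The 2-s.d.-per-level criterion is an assumption, not the
    paper's procedure."""
    return signal_range_mv / (criterion * noise_sd_mv)

def gaussian_channel_rate(bandwidth_hz, snr):
    """Shannon capacity of a flat band-limited Gaussian channel,
    in bits per second: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1.0 + snr)

# Hypothetical numbers chosen only to illustrate the scales:
levels = discriminable_levels(signal_range_mv=40.0, noise_sd_mv=1.0)
bits_per_level = math.log2(levels)              # ≈ 4.3 bits
rate = gaussian_channel_rate(100.0, 20.0)       # ≈ 440 bits/s
```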
We develop model-independent methods for characterizing the information carried by particular features of a neural spike train as it encodes continuously varying stimuli. These methods consist, in essence, of an inverse statistical approach; instead of asking for the statistics of neural responses to a given stimulus, we describe the probability distribution of stimuli that give rise to a certain short pattern of spikes. These ‘response-conditional ensembles’ contain all the information about the stimulus that a hypothetical observer of the spike train may obtain. The structure of these distributions thus provides a quantitative picture of the neural code, and certain integrals of these distributions determine the absolute information in bits carried by a given spike sequence. These methods are applied to a movement-sensitive neuron (H1) in the visual system of the blowfly Calliphora erythrocephala. The stimulus is chosen as the time-varying angular velocity of a (spatially) random pattern, and we consider segments of the spike train of up to three spikes with specified spike intervals. We demonstrate that, with extensive analysis, a single experiment of roughly one hour’s duration is sufficient to provide reliable estimates of the relevant probability distributions. From the experimentally determined probability distributions we are able to draw several conclusions. (1) Under the conditions of our experiment, observation of a single spike carries roughly 0.36 bits of information, but spike pairs carry an interval-dependent signal that can be much larger than 0.72 bits; estimates of the total information capacity are in rough agreement with the maximum possible capacity given the signal-to-noise characteristics of the photoreceptors.
(2) On average a single spike signals the occurrence of a velocity waveform that is positive (movement in the excitatory direction) at all times before the spike, whereas spike pairs can signal both positive and negative velocities, depending on the inter-spike interval. (3) Although inter-spike intervals are crucial in extracting all the coded information, the code is robust to several millisecond errors in the estimate of spike arrival times. (4) Short spike sequences give reliable information about specific features of the stimulus waveform, and this specificity can be quantified. (5) Our results suggest approximate strategies for reading the neural code – reconstructing the stimulus from observations of the spike train – and some preliminary reconstructions are presented. Some tentative attempts are made to relate our results to the more general questions of coding and computation in the nervous system.
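A per-spike information figure like the 0.36 bits quoted above can be computed model-independently from the time-varying firing rate alone, via I = ⟨(r/r̄) log2(r/r̄)⟩ averaged over time. This formula (due to later work by Brenner and colleagues) is used here as a related illustration, not a reproduction of the paper's response-conditional-ensemble analysis.

```python
import numpy as np

def info_per_spike(rate):
    """Model-independent information per spike, in bits, from a
    time-varying firing rate r(t):
        I = < (r / rbar) * log2(r / rbar) >_t
    Bins with zero rate contribute zero (the 0*log 0 limit)."""
    r = np.asarray(rate, dtype=float)
    x = r / r.mean()
    nz = x > 0
    return float(np.sum(x[nz] * np.log2(x[nz])) / x.size)

# A constant rate carries no information per spike; a rate that is
# doubled half the time and zero the other half carries exactly 1 bit.
const = info_per_spike(np.full(1000, 10.0))
bursty = info_per_spike(np.tile([20.0, 0.0], 500))
```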
In the past decade, a small corner of the fly's visual system has become an important testing ground for ideas about coding and computation in the nervous system. A number of results demonstrate that this system operates with a precision and efficiency near the limits imposed by physics, and more generally these results point to the reliability and efficiency of the strategies that nature has selected for representing and processing visual signals. A recent series of papers by Egelhaaf and coworkers, however, suggests that almost all these conclusions are incorrect. In this contribution we place these controversies in a larger context, emphasizing that the arguments are not just about flies, but rather about how we should quantify the neural response to complex, naturalistic inputs. As an example, Egelhaaf et al. (and many others) compute certain correlation functions and use the apparent correlation times as a measure of temporal precision in the neural response. This analysis neglects the structure of the correlation function at short times, and we show how to analyze this structure to reveal a temporal precision 30 times better than suggested by the correlation time; this precision is confirmed by a much more detailed information theoretic analysis. In reviewing other aspects of the controversy, we find that the analysis methods used by Egelhaaf et al. suffer from some mathematical inconsistencies, and that in some cases we are unable to reproduce their experimental results. Finally, we present results from new experiments that probe the neural response to inputs that approach more closely the natural context for freely flying flies. These new experiments demonstrate that the fly's visual system is even more precise and efficient under natural conditions than had been inferred from our earlier work.
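The central point about correlation times can be made with a toy simulation: if spikes on repeated trials are locked to the same events with ~1 ms jitter, while the events themselves recur on a ~30 ms scale, then trial-to-trial spike-time differences reveal the millisecond precision that the overall correlation time hides. All numbers below are illustrative assumptions, not the fly data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Event times with a long (~30 ms mean spacing) time scale:
events = np.cumsum(rng.exponential(0.030, 200))

# Two repeated "trials": spikes locked to the same events with
# independent ~1 ms Gaussian jitter on each trial.
trial_a = events + rng.normal(0.0, 0.001, events.size)
trial_b = events + rng.normal(0.0, 0.001, events.size)

# The spread of spike-time differences across trials measures the
# true temporal precision: about sqrt(2) * 1 ms, some 20-30 times
# finer than the 30 ms correlation scale of the event sequence.
jitter = float(np.std(trial_a - trial_b))
```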
Statistical properties of spike trains measured from a sensory neuron in vivo are studied experimentally and theoretically. Experiments are performed on an identified neuron in the visual system of the blowfly. It is shown that the spike trains exhibit universal behavior at short times, modulated by a stimulus-dependent envelope at long times. A model of the neuron as a nonlinear oscillator driven by noise and an external stimulus is suggested to account for these results. The model enables a theoretical distinction between the effects of internal neuronal properties and the effects of external stimulus properties, and their identification in the measured spike trains. The universal regime is characterized by one dimensionless parameter, representing the internal degree of irregularity, which is determined both by the sensitivity of the neuron and by the properties of the noise. The envelope is related in a simple way to properties of the input stimulus as seen through the nonlinearity of the neural response. Explicit formulas are derived for different statistical properties in both the universal and the stimulus-dependent regimes. These formulas are in very good agreement with the data in both regimes.
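A noise-driven oscillator of the kind described can be caricatured as a phase variable that advances under a deterministic drive plus noise and emits a spike each time it crosses threshold; the interspike-interval coefficient of variation then plays the role of a dimensionless irregularity parameter. This is a stand-in sketch with assumed parameter values, not the paper's model.

```python
import numpy as np

def noisy_oscillator_isis(drive, noise_sd, n_spikes, dt=0.001, seed=0):
    """Phase oscillator: dphi/dt = drive + noise.  A spike fires
    whenever the phase crosses 1, after which 1 is subtracted.
    Returns the resulting interspike intervals (seconds)."""
    rng = np.random.default_rng(seed)
    isis, phase, t, last = [], 0.0, 0.0, 0.0
    while len(isis) < n_spikes:
        phase += drive * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
        if phase >= 1.0:
            phase -= 1.0
            isis.append(t - last)
            last = t
    return np.array(isis)

# Weak noise gives clock-like firing (small CV); strong noise gives
# irregular firing (CV near or above 1), with the same mean rate.
regular = noisy_oscillator_isis(drive=10.0, noise_sd=0.5, n_spikes=500)
irregular = noisy_oscillator_isis(drive=10.0, noise_sd=5.0, n_spikes=500)
cv_reg = regular.std() / regular.mean()
cv_irr = irregular.std() / irregular.mean()
```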