The response of a cortical cell to a repeated stimulus can be highly variable from one trial to the next. Much lower variability has been reported for retinal cells. We recorded visual responses simultaneously from three successive stages of the cat visual system: retinal ganglion cells (RGCs), thalamic (LGN) relay cells, and simple cells in layer 4 of primary visual cortex. Spike count variability was lower than that of a Poisson process at all three stages but increased at each stage. Absolute and relative refractory periods largely accounted for the reliability at all three stages. Our results show that cortical responses can be more reliable than previously thought. The differences in reliability across retina, LGN, and cortex can be explained by (1) decreasing firing rates and (2) decreasing absolute and relative refractory periods.
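The link between refractoriness and sub-Poisson spike count variability can be illustrated with a toy simulation (a sketch, not the paper's model): a renewal process whose exponential intervals are extended by an absolute dead time yields a Fano factor below 1. The rate, duration, and 3 ms dead time below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(rate_hz, dead_time_s, duration_s, n_trials):
    """Spike counts per trial for a Poisson process with an absolute
    refractory period: each exponential interval is extended by a dead time."""
    counts = []
    for _ in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate_hz) + dead_time_s
            if t > duration_s:
                break
            n += 1
        counts.append(n)
    return np.array(counts)

def fano(counts):
    """Fano factor: spike count variance / mean (equals 1 for a Poisson process)."""
    return counts.var() / counts.mean()

poisson = simulate_counts(30.0, 0.000, 1.0, 2000)  # no refractoriness
refrac = simulate_counts(30.0, 0.003, 1.0, 2000)   # 3 ms absolute dead time
# Refractoriness regularizes the spike train, pushing the Fano factor below 1.
```

Relative refractoriness (a gradual recovery of excitability) would regularize the train further; the absolute dead time alone already captures the qualitative effect.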
The amount of information a sensory neuron carries about a stimulus is directly related to response reliability. We recorded from individual neurons in the cat lateral geniculate nucleus (LGN) while presenting randomly modulated visual stimuli. The responses to repeated stimuli were reproducible, whereas the responses evoked by nonrepeated stimuli drawn from the same ensemble were variable. Stimulus-dependent information was quantified directly from the difference in entropy of these neural responses. We show that a single LGN cell can encode much more visual information than had been demonstrated previously, ranging from 15 to 102 bits/sec across our sample of cells. Information rate was correlated with the firing rate of the cell, corresponding to a consistent rate of 3.6 ± 0.6 bits/spike (mean ± SD). This information can primarily be attributed to the high temporal precision with which firing probability is modulated; many individual spikes were timed with better than 1 msec precision. We introduce a way to estimate the amount of information encoded in temporal patterns of firing, as distinct from the information in the time-varying firing rate at any temporal resolution. Using this method, we find that temporal patterns sometimes introduce redundancy but often encode visual information. The contribution of temporal patterns ranged from −3.4 to +25.5 bits/sec, or from −9.4 to +24.9% of the total information content of the responses.

Key words: LGN; neural coding; information theory; entropy; white noise; reliability; variability

Cells in the lateral geniculate nucleus of the thalamus (LGN) respond to spatial and temporal changes in light intensity within their receptive fields. The collective responses of many such cells constitute the input to visual cortex. All stimulus discrimination at the perceptual level must ultimately be supported by reliable differences in the neural response at the level of the LGN cell population.
We are therefore interested in measuring the statistical discriminability of LGN responses elicited by different visual stimuli. It has been shown that the LGN can respond to visual stimuli with remarkable temporal precision (Reich et al., 1997). This implies that LGN neurons have the capability to signal information at high rates. Previous estimates of the information in LGN responses have used two general approaches. The first approach, stimulus reconstruction, relies on an explicit model of what the neuron is encoding, as well as an algorithm for decoding it (Bialek et al., 1991; Rieke et al., 1997). This method has been used to place lower bounds on the information encoded by single neurons (Reinagel et al., 1999) or pairs of neurons (Dan et al., 1998) in the LGN in response to dynamic visual stimuli. The second approach, the "direct" method, relies instead on statistical properties of the responses to different stimuli (the entropy of the responses). Because this involves only comparisons of spike trains, without reference to stimulus parameters, we need not know what features of the stimulus the cell ...
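The entropy-difference idea behind the direct method can be sketched on toy data: estimate the total entropy of binary spike "words" pooled across all stimulus times, and subtract the average noise entropy across trials at each fixed time. The word length, noise level, and response model below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def entropy_bits(words):
    """Shannon entropy (bits) of the empirical distribution of spike words."""
    counts = np.array(list(Counter(map(tuple, words)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy responses: 3-bin binary words. Each stimulus time evokes a nearly
# fixed word; a small flip probability models trial-to-trial noise.
n_trials, n_times = 200, 50
preferred = rng.integers(0, 2, size=(n_times, 3))   # stimulus-locked word
flips = rng.random((n_trials, n_times, 3)) < 0.05   # 5% noise per bin
responses = preferred[None, :, :] ^ flips           # shape (trials, times, 3)

# Total entropy: word distribution pooled over all stimulus times.
H_total = entropy_bits(responses.reshape(-1, 3))
# Noise entropy: across-trial entropy at each fixed time, averaged over times.
H_noise = np.mean([entropy_bits(responses[:, t]) for t in range(n_times)])
info_per_word = H_total - H_noise  # stimulus-dependent information (bits/word)
```

Because only spike trains are compared, nothing about the stimulus enters the estimate except through its effect on the response distribution, which is what makes the method "direct".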
Decoding visual information from a population of retinal ganglion cells. J. Neurophysiol. 78: 2336-2350, 1997. This work investigates how a time-dependent visual stimulus is encoded by the collective activity of many retinal ganglion cells. Multiple ganglion cell spike trains were recorded simultaneously from the isolated retina of the tiger salamander using a multielectrode array. The stimulus consisted of photopic, spatially uniform, temporally broadband flicker. From the recorded spike trains, an estimate was obtained of the stimulus intensity as a function of time. This was compared with the actual stimulus to assess the quality and quantity of visual information conveyed by the ganglion cell population. Two algorithms were used to decode the spike trains: an optimized linear filter in which each action potential made an additive contribution to the stimulus estimate and an artificial neural network trained by back-propagation to match spike trains with stimuli. The two methods performed indistinguishably, suggesting that most of the information about this stimulus can be extracted by linear operations on the spike trains. Individual ganglion cells conveyed information at a rate of 3.2 ± 1.7 bits/s (mean ± SD), with an average information content per spike of 1.6 bits. The maximal possible rate of information transmission compatible with the measured spiking statistics was 13.9 ± 6.3 bits/s. On average, ganglion cells used 22% of this capacity to encode visual information. When a decoder received two spike trains of the same response type, the reconstruction improved only marginally over that obtained from a single cell. However, a decoder using an ON and an OFF cell extracted as much information as the sum of that obtained from each cell alone. Thus, cells of opposite response type encode different and nonoverlapping features of the stimulus.
As more spike trains were provided to the decoder, the total information rate rapidly saturated, with 79% of the maximal value obtained from a local cluster of just four neurons of different functional types. The decoding filter applied to a given neuron's spikes within such a multiunit decoder differed substantially from the filter applied to that same neuron in a single-unit decoder. This shows that the optimal interpretation of a ganglion cell's action potential depends strongly on the simultaneous activity of other nearby cells. The quality of the stimulus reconstruction varied greatly with frequency: flicker components below 1 Hz and above 10 Hz were reconstructed poorly, and the performance was optimal near 2.5 Hz. Further analysis suggests that temporal encoding by ganglion cell spike trains is limited by slow phototransduction in the cone photoreceptors and a corrupting noise source proximal to the cones.
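The optimized linear decoder described above can be sketched as least-squares regression from lagged spike trains onto the stimulus, so that each action potential contributes an additive copy of its decoding filter to the estimate. The toy encoder (a sigmoidal rate model), the filter shape, and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy encoder: spiking probability is a sigmoid of the filtered stimulus.
T, L = 5000, 20                        # time bins, encoding filter length
stim = rng.standard_normal(T)
kernel = np.exp(-np.arange(L) / 5.0)   # hypothetical encoding filter
drive = np.convolve(stim, kernel)[:T]
spikes = (rng.random(T) < 1.0 / (1.0 + np.exp(-drive))).astype(float)

# Optimized linear decoder: least-squares weights on lagged spike trains,
# so each spike adds one copy of the decoding filter to the estimate.
lags = np.arange(-L, L)                # acausal lags: spikes follow the stimulus
X = np.stack([np.roll(spikes, -k) for k in lags], axis=1)
w, *_ = np.linalg.lstsq(X, stim, rcond=None)
estimate = X @ w

corr = np.corrcoef(estimate, stim)[0, 1]  # reconstruction quality
```

Extending this to a population simply widens the design matrix with lagged spikes from additional cells; the fitted filter for a given cell then depends on which other cells are included, which is the multiunit-versus-single-unit effect noted above.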
Encoding of visual information by LGN bursts. Thalamic relay cells respond to visual stimuli either in burst mode, as a result of activation of a low-threshold Ca2+ conductance, or in tonic mode, when this conductance is inactive. We investigated the role of these two response modes for the encoding of the time course of dynamic visual stimuli, based on extracellular recordings of 35 relay cells from the lateral geniculate nucleus of anesthetized cats. We presented a spatially optimized visual stimulus whose contrast fluctuated randomly in time with frequencies of up to 32 Hz. We estimated the visual information in the neural responses using a linear stimulus reconstruction method. Both burst and tonic spikes carried information about stimulus contrast, exceeding one bit per action potential for the highest variance stimuli. The "meaning" of an action potential, i.e., the optimal estimate of the stimulus at times preceding a spike, was similar for burst and tonic spikes. In within-trial comparisons, tonic spikes carried about twice as much information per action potential as bursts, but bursts as unitary events encoded about three times more information per event than tonic spikes. The coding efficiency of a neuron for a particular stimulus is defined as the fraction of the neural coding capacity that carries stimulus information. Based on a lower bound estimate of coding efficiency, bursts had approximately 1.5-fold higher efficiency than tonic spikes, or 3-fold if bursts were considered unitary events. Our main conclusion is that both bursts and tonic spikes encode stimulus information efficiently, which rules out the hypothesis that bursts are nonvisual responses.
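The abstract does not state its burst criterion; the sketch below uses a common ISI-based convention for LGN bursts (first spike preceded by at least 100 ms of silence, subsequent intra-burst intervals of at most 4 ms), which may differ from the paper's exact definition. The thresholds are the conventional values, included here as assumptions.

```python
import numpy as np

def classify_bursts(spike_times, silence_s=0.100, intra_s=0.004):
    """Label each spike burst (True) or tonic (False): a burst starts with a
    spike preceded by >= silence_s of quiet and continues while inter-spike
    intervals stay <= intra_s."""
    gaps = np.diff(spike_times, prepend=-np.inf)
    labels = np.zeros(len(spike_times), dtype=bool)
    in_burst, start = False, 0
    for i, gap in enumerate(gaps):
        if gap >= silence_s:
            in_burst, start = True, i        # candidate burst start
        elif in_burst and gap <= intra_s:
            labels[start] = labels[i] = True  # burst confirmed; mark both spikes
        else:
            in_burst = False
    return labels

# Example: a 3-spike burst followed by two tonic spikes (times in seconds).
labels = classify_bursts(np.array([0.000, 0.002, 0.003, 0.500, 0.700]))
```

Treating a burst as a unitary event then amounts to counting each run of True labels once rather than per spike, which is the distinction behind the per-spike versus per-event information rates reported above.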
Early stages of visual processing may exploit the characteristic structure of natural visual stimuli. This structure may differ from the intrinsic structure of natural scenes, because sampling of the environment is an active process. For example, humans move their eyes several times a second when looking at a scene. The portions of a scene that fall on the fovea are sampled at high spatial resolution, and receive a disproportionate fraction of cortical processing. We recorded the eye positions of human subjects while they viewed images of natural scenes. We report that active selection affected the statistics of the stimuli encountered by the fovea, and also by the parafovea up to eccentricities of 4 degrees. We found two related effects. First, subjects looked at image regions that had high spatial contrast. Second, in these regions, the intensities of nearby image points (pixels) were less correlated with each other than in images selected at random. These effects could serve to increase the information available to the visual system for further processing. We show that both of these effects can be simply obtained by constructing an artificial ensemble composed of the highest-contrast regions of images.
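The two measures used above, local contrast and the correlation between nearby pixel intensities, and the construction of a high-contrast "artificial ensemble", can be sketched on a synthetic spatially correlated image. The image model, patch size, and ensemble size are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def rms_contrast(patch):
    """Root-mean-square contrast: intensity SD relative to mean luminance."""
    return patch.std() / max(float(patch.mean()), 1e-9)

def neighbor_correlation(patch):
    """Correlation between horizontally adjacent pixel intensities."""
    return np.corrcoef(patch[:, :-1].ravel(), patch[:, 1:].ravel())[0, 1]

# Toy image with spatial correlations (cumulative sums of white noise).
rng = np.random.default_rng(3)
img = rng.standard_normal((128, 128))
for axis in (0, 1):
    img = np.cumsum(img, axis=axis)
img = (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]

# Tile into 16x16 patches and rank by contrast to build the ensemble.
patches = [img[i:i + 16, j:j + 16]
           for i in range(0, 128, 16) for j in range(0, 128, 16)]
patches.sort(key=rms_contrast, reverse=True)
ensemble = patches[:10]  # the highest-contrast regions
```

Comparing `neighbor_correlation` averaged over `ensemble` against the average over all patches is the kind of comparison the study makes between fixated and randomly selected regions.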