The origin of orientation selectivity in visual cortical responses is a central problem for understanding cerebral cortical circuitry. In cats, many experiments suggest that orientation selectivity arises from the arrangement of lateral geniculate nucleus (LGN) afferents to layer 4 simple cells. However, this explanation is not sufficient to account for the contrast invariance of orientation tuning. To understand contrast invariance, we first characterize the input to cat simple cells generated by the oriented arrangement of LGN afferents. We demonstrate that it has two components: a spatial-phase-specific component (i.e., one that depends on receptive field spatial phase), which is tuned for orientation, and a phase-nonspecific component, which is untuned. Both components grow with contrast. Second, we show that a correlation-based intracortical circuit, in which connectivity between cell pairs is determined by the correlation of their LGN inputs, is sufficient to achieve well-tuned, contrast-invariant orientation tuning. This circuit generates both spatially opponent, "antiphase" inhibition ("push-pull") and spatially matched, "same-phase" excitation. The inhibition, if sufficiently strong, suppresses the untuned input component and sharpens responses to the tuned component at all contrasts. The excitation amplifies tuned responses. This circuit agrees with experimental evidence showing spatial opponency between, and similar orientation tuning of, the excitatory and inhibitory inputs received by a simple cell. Orientation tuning is primarily input-driven, accounting for the observed invariance of tuning width after removal of intracortical synaptic input, as well as for the dependence of orientation tuning on stimulus spatial frequency.
The model differs from previous push-pull models in requiring dominant rather than balanced inhibition and in predicting that a population of layer 4 inhibitory neurons should respond in a contrast-dependent manner to stimuli of all orientations, although their tuning width may be similar to that of excitatory neurons. The model demonstrates that fundamental response properties of cortical layer 4 can be explained by circuitry expected to develop under correlation-based rules of synaptic plasticity, and shows how such circuitry allows the cortex to distinguish stimulus intensity from stimulus form.
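The tuned/untuned decomposition described above can be illustrated with a toy simulation (not the paper's full model): each LGN afferent is modeled as a rectified sinusoid responding to a drifting grating, with spatial phases aligned for a preferred-orientation grating and spread uniformly for an orthogonal one. The rates, afferent count, and geometry are all hypothetical.

```python
import numpy as np

def lgn_rate(t, phase, c, r0=10.0, f=2.0):
    """Rectified response of one LGN cell to a drifting grating of contrast c."""
    return np.maximum(0.0, r0 + c * np.sin(2 * np.pi * f * t - phase))

t = np.linspace(0, 1, 2000, endpoint=False)  # one second of response
n = 8                                        # afferents onto one simple cell

def total_input(phases, c):
    """Summed LGN input for afferents at the given spatial phases."""
    return sum(lgn_rate(t, p, c) for p in phases)

def f1(x):
    """Modulation amplitude at the grating temporal frequency (2 Hz)."""
    return 2 * np.abs(np.fft.rfft(x)[2]) / len(x)

# Preferred orientation: spatial phases aligned -> strong phase-specific,
# tuned modulation. Orthogonal orientation: phases spread over the cycle ->
# modulation cancels, leaving the phase-nonspecific, untuned mean, which
# still grows with contrast because of rectification.
for c in (10.0, 40.0):
    pref = total_input(np.zeros(n), c)
    orth = total_input(2 * np.pi * np.arange(n) / n, c)
    print(f"c={c:2.0f}: F1(pref)={f1(pref):6.1f}  "
          f"F1(orth)={f1(orth):6.3f}  mean={orth.mean():6.1f}")
```

In this sketch the untuned mean is what a sufficiently strong push-pull inhibition would need to suppress at every contrast.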
Many phenomenological models of the responses of simple cells in primary visual cortex have concluded that a cell's firing rate should be given by its input raised to a power greater than one, an expansive power-law nonlinearity. However, intracellular recordings have shown that a different nonlinearity, a linear-threshold function, appears to give a good prediction of firing rate from a cell's low-pass-filtered voltage response. Using a model based on a linear-threshold function, Anderson et al. showed that voltage noise was critical to converting voltage responses with contrast-invariant orientation tuning into spiking responses with contrast-invariant tuning. We present two separate results clarifying the connection between noise-smoothed linear-threshold functions and power-law nonlinearities. First, we prove analytically that a power-law nonlinearity is the only input-output function that converts contrast-invariant input tuning into contrast-invariant spike tuning. Second, we examine simulations of a simple model in which the instantaneous spike rate is given by a linear-threshold function of voltage and voltage responses include significant noise. We show that the resulting average spike rate is well described by an expansive power law of the average voltage (averaged over multiple trials), provided that the average voltage remains less than about 1.5 SDs of the noise above threshold. Finally, we use this model to show that the noise levels recorded by Anderson et al. are consistent with the degree to which spiking responses are more sharply tuned for orientation than the underlying voltage responses. Thus neuronal noise can robustly generate power-law input-output functions of the form frequently postulated for simple cells.
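The second result can be sketched numerically with hypothetical parameters (not fits to data): instantaneous rate is a linear-threshold function of voltage, Gaussian noise is added, and the trial-averaged rate is then well fit by an expansive power law over the stated voltage range.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 4.0     # voltage noise SD (mV), illustrative
theta = 10.0    # spike threshold relative to rest (mV)
gain = 7.0      # Hz per mV above threshold

def mean_rate(mu, n=400_000):
    """Trial-averaged rate for mean voltage mu (mV above rest), Monte Carlo."""
    v = mu + sigma * rng.standard_normal(n)
    return gain * np.maximum(0.0, v - theta).mean()

# Average voltages up to ~1.5 noise SDs above threshold, as in the abstract.
mus = np.linspace(4.0, theta + 1.5 * sigma, 15)
rates = np.array([mean_rate(m) for m in mus])

# Fit r = k * mu^p in log-log coordinates.
p, logk = np.polyfit(np.log(mus), np.log(rates), 1)
pred = np.exp(logk) * mus ** p
print(f"fitted exponent p = {p:.2f}")  # expansive: p > 1
```

The fitted exponent lands in the range of two to four, comparable to the expansive exponents commonly reported for simple cells.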
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular-spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the postspike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random-walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
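The two regimes can be demonstrated with a minimal integrate-and-fire sketch (toy parameters, not the matched RC and reset values from the paper): suprathreshold mean drive with weak noise gives regular, recovery-dominated spiking, while a subthreshold mean with strong fluctuations gives irregular, steady-state spiking.

```python
import numpy as np

def lif_isis(mu, sigma, t_max, dt=0.0002, tau=0.020,
             v_th=1.0, v_reset=0.0, seed=0):
    """Simulate a noisy leaky integrate-and-fire neuron; return its ISIs.
    Voltage is in threshold units; mu is the mean drive, sigma the noise."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    noise = rng.standard_normal(n) * sigma * np.sqrt(dt / tau)
    v, last, isis = v_reset, 0.0, []
    for i in range(n):
        v += (mu - v) * dt / tau + noise[i]
        if v >= v_th:
            t = i * dt
            isis.append(t - last)
            last, v = t, v_reset
    return np.array(isis)

def cv(x):
    """Coefficient of variation of the interspike intervals."""
    return x.std() / x.mean()

# Postspike-recovery regime: suprathreshold mean, weak noise -> 1/sqrt(N)
# averaging and regular spiking (CV << 1).
cv_reg = cv(lif_isis(mu=1.5, sigma=0.05, t_max=20.0))
# Steady-state regime: subthreshold mean, crossings driven by input
# variance -> nearly Poisson, irregular spiking (CV near 1).
cv_irr = cv(lif_isis(mu=0.7, sigma=0.5, t_max=60.0))
print(f"CV regular = {cv_reg:.2f}, CV irregular = {cv_irr:.2f}")
```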
Adult zebra finch songs consist of stereotyped sequences of syllables. Although some behavioral and physiological data suggest that songs are structured hierarchically, there is also evidence that they are driven by nonhierarchical, clock-like bursting in the premotor nucleus HVC (used as a proper name). In this study, we developed a semiautomated template-matching algorithm to identify repeated sequences of syllables and a modified dynamic time-warping algorithm to make fine-grained measurements of the temporal structure of song. We find that changes in song length are expressed across the song as a whole rather than resulting from an accumulation of independent variance during singing. Song length changes systematically over the course of a day and is related to the general level of bird activity as well as the presence of a female. The data also show patterns of variability that suggest distinct mechanisms underlying syllable and gap lengths: as tempo varies, syllables stretch and compress proportionally less than gaps, whereas syllable-syllable and gap-gap correlations are significantly stronger than syllable-gap correlations. There is also increased temporal variability at motif boundaries and especially strong positive correlations between the same syllables sung in different motifs. Finally, we find evidence that syllable onsets may have a special role in aligning syllables with global song structure. Generally, the timing data support a hierarchical view in which song is composed of smaller syllable-based units and provide a rich set of constraints for interpreting the results of physiological recordings.
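The fine-grained timing measurements rest on a modified dynamic time-warping algorithm; the modifications are not reproduced here, but the textbook DTW recurrence they build on can be sketched, with made-up 1-D signals standing in for syllable acoustic features.

```python
import numpy as np

def dtw(a, b):
    """Textbook dynamic time warping: minimum cumulative alignment cost
    between two 1-D sequences, allowing local stretching and compression."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A uniformly stretched rendition aligns at low cost, which is what lets
# warping-based alignment measure fine-grained tempo changes across songs.
template = np.sin(np.linspace(0, np.pi, 50))
stretched = np.sin(np.linspace(0, np.pi, 65))   # same shape, slower "tempo"
unrelated = np.random.default_rng(0).uniform(size=50)
print(dtw(template, template), dtw(template, stretched), dtw(template, unrelated))
```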
Birdsong learning provides an ideal model system for studying temporally complex motor behavior. Guided by the well-characterized functional anatomy of the song system, we have constructed a computational model of the sensorimotor phase of song learning. Our model uses simple Hebbian and reinforcement learning rules and demonstrates the plausibility of a detailed set of hypotheses concerning sensory-motor interactions during song learning. The model focuses on the motor nuclei HVc and robust nucleus of the archistriatum (RA) of zebra finches and incorporates the long-standing hypothesis that a series of song nuclei, the Anterior Forebrain Pathway (AFP), plays an important role in comparing the bird's own vocalizations with a previously memorized song, or “template.” This “AFP comparison hypothesis” is challenged by the significant delay that would be experienced by presumptive auditory feedback signals processed in the AFP. We propose that the AFP does not directly evaluate auditory feedback, but instead, receives an internally generated prediction of the feedback signal corresponding to each vocal gesture, or song “syllable.” This prediction, or “efference copy,” is learned in HVc by associating premotor activity in RA-projecting HVc neurons with the resulting auditory feedback registered within AFP-projecting HVc neurons. We also demonstrate how negative feedback “adaptation” can be used to separate sensory and motor signals within HVc. The model predicts that motor signals recorded in the AFP during singing carry sensory information and that the primary role for auditory feedback during song learning is to maintain an accurate efference copy. The simplicity of the model suggests that associational efference copy learning may be a common strategy for overcoming feedback delay during sensorimotor learning.
Phase resetting curves (PRCs) provide a measure of the sensitivity of oscillators to perturbations. In a noisy environment, these curves are themselves very noisy. Using perturbation theory, we compute the mean and the variance of PRCs for arbitrary limit-cycle oscillators when the noise is small. The resulting phase-dependent mean and variance are fit to experimental data and compared with an ad hoc estimation method; the theoretical, phase-dependent curves match both simulations and experimental data significantly better than the ad hoc method. Predictions based on the analytical phase-dependent variance estimate are also compared to a two-cell network simulation. Finally, we discuss how entrainment of a neuron to a periodic pulse train depends on the noise amplitude.
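As a concrete example of what a PRC measures, here is a sketch using a leaky integrate-and-fire cell driven above threshold as the limit-cycle oscillator (a stand-in with illustrative parameters, not the oscillators analyzed in the paper): a small voltage kick at phase φ advances the next spike, and the advance as a fraction of the period is the PRC. Repeating the measurement with voltage noise added would produce the noisy curves whose mean and phase-dependent variance the theory characterizes.

```python
import numpy as np

# LIF "oscillator": dV/dt = (I - V)/tau with I above threshold, so the
# cell spikes periodically with period T.
tau, I, v_th = 0.020, 1.5, 1.0
T = tau * np.log(I / (I - v_th))     # unperturbed period

def time_to_spike(v0):
    """Time to reach threshold from voltage v0 under the deterministic flow."""
    return tau * np.log((I - v0) / (I - v_th))

def prc(phi, eps=0.01):
    """Phase advance (fraction of a period) from a voltage kick eps at phase phi."""
    t = phi * T
    v = I * (1.0 - np.exp(-t / tau))     # voltage on the limit cycle at phase phi
    t_next = t + time_to_spike(v + eps)  # perturbed time of the next spike
    return (T - t_next) / T

print([round(prc(phi), 4) for phi in (0.2, 0.5, 0.8)])
```

For this oscillator the PRC is everywhere positive and grows with phase, the expected shape for a purely depolarizing kick to an integrator.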
We develop a new analysis of the lateral geniculate nucleus (LGN) input to a cortical simple cell, demonstrating that this input is the sum of two terms, a linear term and a nonlinear term. In response to a drifting grating, the linear term represents the temporal modulation of input, and the nonlinear term represents the mean input. The nonlinear term, which grows with stimulus contrast, has been neglected in many previous models of simple cell response. We then analyze two scenarios by which contrast invariance of orientation tuning may arise. In the first scenario, at larger contrasts, the nonlinear part of the LGN input, in combination with strong push-pull inhibition, counteracts the nonlinear effects of cortical spike threshold, giving the result that orientation tuning scales with contrast. In the second scenario, at low contrasts, the nonlinear component of LGN input is negligible, and noise smooths the nonlinearity of spike threshold so that the input-output function approximates a power-law function. These scenarios can be combined to yield contrast-invariant tuning over the full range of stimulus contrast. The model clarifies the contribution of LGN nonlinearities to the orientation tuning of simple cells and demonstrates how these nonlinearities may impact different models of contrast-invariant tuning.
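The two terms can be seen in a single rectified LGN response (toy background rate and contrasts, not fitted values): while modulation stays below the background rate, the mean is contrast-independent; once modulation exceeds background, rectification makes the mean (the nonlinear term) grow with contrast alongside the modulation (the linear term).

```python
import numpy as np

r0 = 10.0                                  # background LGN rate (Hz)
t = np.linspace(0, 1, 4000, endpoint=False)

def mean_and_f1(c, f=2.0):
    """Mean (nonlinear term) and modulation amplitude (linear term) of a
    rectified LGN response to a drifting grating of contrast c."""
    r = np.maximum(0.0, r0 + c * np.sin(2 * np.pi * f * t))
    f1 = 2 * np.abs(np.fft.rfft(r)[int(f)]) / len(r)
    return r.mean(), f1

for c in (5.0, 10.0, 20.0, 40.0):
    m, a = mean_and_f1(c)
    print(f"c={c:4.0f}: mean={m:5.1f}  F1={a:5.1f}")
```

A purely linear model would keep the mean pinned at the background rate at every contrast, which is exactly the term this analysis shows cannot be neglected.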
Motor variability often reflects a mixture of different neural and peripheral sources operating over a range of timescales. We present a statistical model of sequence timing that can be used to measure three distinct components of timing variability: global tempo changes that are spread across the sequence, such as might stem from neuromodulatory sources with widespread influence; fast, uncorrelated timing noise, stemming from noisy components within the neural system; and timing jitter that does not alter the timing of subsequent elements, such as might be caused by variation in the motor periphery or by measurement error. In addition to quantifying the variability contributed by each of these latent factors in the data, the approach assigns maximum likelihood estimates of each factor on a trial-to-trial basis. We applied the model to adult zebra finch song, a temporally complex behavior with rich structure on multiple timescales. We find that individual song vocalizations (syllables) contain roughly equal amounts of variability in each of the three components while overall song length is dominated by global tempo changes. Across our sample of syllables, both global and independent variability scale with average length while timing jitter does not, a pattern consistent with the Wing and Kristofferson (1973) model of sequence timing. We also find significant day-to-day drift in all three timing sources, but a circadian pattern in tempo only. In tests using artificially generated data, the model successfully separates out the different components with small error. The approach provides a general framework for extracting distinct sources of timing variability within action sequences, and can be applied to neural and behavioral data from a wide array of systems.
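A generative sketch of the three components (all magnitudes hypothetical) shows the covariance signatures such a model can exploit: a global tempo factor correlates all intervals in a trial, independent noise accumulates into later onsets, and jitter shifts one observed boundary without moving subsequent elements, which selectively lowers adjacent-interval correlations relative to distant ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_int = 2000, 6
mean_len = np.full(n_int, 100.0)   # nominal element lengths (ms)
sd_tempo, sd_indep, sd_jitter = 0.02, 1.0, 1.0

# Global tempo: one multiplicative factor per trial, shared by all intervals.
tempo = 1.0 + sd_tempo * rng.standard_normal((n_trials, 1))
# Independent timing noise: per-interval, propagates to all later onsets.
indep = sd_indep * rng.standard_normal((n_trials, n_int))
onsets = np.cumsum(mean_len * tempo + indep, axis=1)
# Jitter: displaces an observed boundary without affecting subsequent timing.
jitter = sd_jitter * rng.standard_normal((n_trials, n_int))
obs = onsets + jitter
intervals = np.diff(np.concatenate([np.zeros((n_trials, 1)), obs], axis=1), axis=1)

c_adj = np.corrcoef(intervals[:, 2], intervals[:, 3])[0, 1]
c_far = np.corrcoef(intervals[:, 1], intervals[:, 4])[0, 1]
print(f"adjacent-interval corr = {c_adj:.2f}, distant-interval corr = {c_far:.2f}")
```

The negative contribution of jitter to adjacent-interval covariance is the same signature exploited by the Wing and Kristofferson (1973) two-level timing model cited above.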