Robust perception of self-motion requires integration of visual motion signals with nonvisual cues. Neurons in the dorsal subdivision of the medial superior temporal area (MSTd) may be involved in this sensory integration, because they respond selectively to global patterns of optic flow, as well as translational motion in darkness. Using a virtual-reality system, we have characterized the three-dimensional (3D) tuning of MSTd neurons to heading directions defined by optic flow alone, inertial motion alone, and congruent combinations of the two cues. Among 255 MSTd neurons, 98% exhibited significant 3D heading tuning in response to optic flow, whereas 64% were selective for heading defined by inertial motion. Heading preferences for visual and inertial motion could be aligned but were just as frequently opposite. Moreover, heading selectivity in response to congruent visual/vestibular stimulation was typically weaker than that obtained using optic flow alone, and heading preferences under congruent stimulation were dominated by the visual input. Thus, MSTd neurons generally did not integrate visual and nonvisual cues to achieve better heading selectivity. A simple two-layer neural network, which received eye-centered visual inputs and head-centered vestibular inputs, reproduced the major features of the MSTd data. The network was trained to compute heading in a head-centered reference frame under all stimulus conditions, such that it performed a selective reference-frame transformation of visual, but not vestibular, signals. The similarity between network hidden units and MSTd neurons suggests that MSTd may be an early stage of sensory convergence involved in transforming optic flow information into a (head-centered) reference frame that facilitates integration with vestibular signals.
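The network described above can be illustrated with a minimal forward-pass sketch: a population of eye-centered visual (optic-flow) units and a population of head-centered vestibular units converge on a shared hidden layer, whose units play the role of MSTd neurons. All layer sizes, tuning widths, the gaze offset, and the weight initialization below are hypothetical placeholders; this sketch shows only the input geometry (the visual population is shifted by eye position, the vestibular population is not), not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (hypothetical; the abstract does not specify layer sizes)
N_VIS = 40   # eye-centered visual (optic-flow) input units
N_VES = 40   # head-centered vestibular input units
N_HID = 20   # hidden units, analogous to MSTd neurons
N_OUT = 8    # output units coding head-centered heading

def tuning(preferred, stimulus, width=30.0):
    """Gaussian tuning curve over heading angle (degrees), wrapped to +/-180."""
    d = (stimulus - preferred + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / width) ** 2)

# Input population responses for one trial
heading_head = 45.0   # true heading, head-centered coordinates
eye_offset = 10.0     # gaze rotation: visual input arrives eye-centered
prefs_vis = np.linspace(-180, 180, N_VIS, endpoint=False)
prefs_ves = np.linspace(-180, 180, N_VES, endpoint=False)

r_vis = tuning(prefs_vis, heading_head - eye_offset)  # shifted by gaze
r_ves = tuning(prefs_ves, heading_head)               # already head-centered

# Two-layer network: hidden layer receives both modalities
W_vis = rng.normal(0, 0.1, (N_HID, N_VIS))
W_ves = rng.normal(0, 0.1, (N_HID, N_VES))
W_out = rng.normal(0, 0.1, (N_OUT, N_HID))

hidden = np.tanh(W_vis @ r_vis + W_ves @ r_ves)
output = W_out @ hidden
```

In the paper, training such a network to report heading in head-centered coordinates forces the hidden layer to remap the visual input (which carries the gaze offset) while passing the vestibular input through unchanged; the random weights above would be replaced by learned ones.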
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS-preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS-preserved tissue. Finally, we determined that antibodies penetrate deep into ECS-preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits. DOI: http://dx.doi.org/10.7554/eLife.08206.001
Some neurons in auditory cortex respond to recent stimulus history by adapting their response functions to track stimulus statistics directly, as might be expected. In contrast, some neurons respond to loud sounds by adjusting their response functions away from high intensities and consequently remain sensitive to softer sounds. In marmoset monkey auditory cortex, the latter type of adaptation appears to exist only in neurons tuned to stimulus intensity.
In utero experience, such as maternal speech in humans, can shape later perception, although the underlying cortical substrate is unknown. In adult mammals, ascending thalamocortical projections target layer 4, and the onset of sensory responses in the cortex is thought to depend on the onset of thalamocortical transmission to layer 4, as well as on the opening of the ears and eyes. In developing animals, thalamic fibers do not target layer 4 but instead target subplate neurons deep in the developing white matter. We investigated whether subplate neurons respond to sensory stimuli. Using electrophysiological recordings in young ferrets, we show that auditory cortex neurons respond to sound at very young ages, even before the opening of the ears. Single-unit recordings showed that auditory responses emerged first in cortical subplate neurons. Subsequently, responses appeared in the future thalamocortical input layer 4, and sound-evoked spike latencies were longer in layer 4 than in subplate, consistent with the known relay of thalamic information to layer 4 by subplate neurons. Electrode array recordings showed that early auditory responses demonstrate a nascent topographic organization, suggesting that topographic maps emerge before the onset of spiking responses in layer 4. Together, our results show that sound-evoked activity and topographic organization of the cortex emerge earlier and in a different layer than previously thought. Thus, early sound experience can activate and potentially sculpt subplate circuits before permanent thalamocortical circuits to layer 4 are present, and disruption of this early sensory activity could serve as an early marker for the diagnosis of developmental disorders.
The responses of auditory neurons tuned to stimulus intensity (i.e., nonmonotonic rate-level responders) have typically been analyzed with stimulus paradigms that eliminate neuronal adaptation to recent stimulus statistics. This is usually accomplished by presenting individual sounds with long silent periods between them. Studies using such paradigms have led to hypotheses that nonmonotonic neurons may play a role in amplitude spectrum coding or level-invariant representations of complex spectral shapes. We have previously proposed an alternative hypothesis that level-tuned neurons may represent specialized coders of low sound levels because they preserve their sensitivity to low levels even when average sound level is relatively high. Here we demonstrate that nonmonotonic neurons in awake marmoset primary auditory cortex accomplish this feat by adapting their upper dynamic range to encode sounds with high mean level, leaving the lower dynamic range available for encoding relatively rare low-level sounds. This adaptive behavior manifests in nonmonotonic relative to monotonic neurons as (1) less overall shifting of rate-level response thresholds and (2) a nonmonotonic gain adjustment with increasing mean stimulus level.
Vocalizations are widely used for communication between animals. Mice use a large repertoire of ultrasonic vocalizations (USVs) in different social contexts. During social interaction, recognizing the partner's sex is important; however, previous research has remained inconclusive as to whether individual USVs contain this information. Using deep neural networks (DNNs) to classify the sex of the emitting mouse from the spectrogram, we obtained unprecedented performance (77%, vs. SVM: 56%; regression: 51%). Performance was even higher (85%) if the DNN could also use each mouse's individual properties during training, which may, however, be of limited practical value. Splitting estimation into two DNNs using 24 extracted features per USV, spectrogram-to-features and features-to-sex (60%), failed to reach single-step performance. Extending the features with each USV's spectral line and its frequency and time marginals in a semi-convolutional DNN resulted in an intermediate performance (64%). Analyzing the network structure suggests an increase in sparsity of activation and in correlation with sex, specifically in the fully connected layers. A detailed analysis of the USV structure reveals a subset of male vocalizations characterized by a few acoustic features, while the majority of sex differences appear to rely on a complex combination of many features. The same network architecture was also able to achieve above-chance classification for cortexless mice, which were previously considered indistinguishable. In summary, spectrotemporal differences between male and female USVs allow at least their partial classification, which enables sexual recognition between mice and automated attribution of USVs during analysis of social interactions.
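The spectrogram-to-sex pipeline above can be sketched in miniature: flatten a USV spectrogram into a feature vector, pass it through a small feed-forward classifier, and read out class probabilities for male vs. female. The spectrogram size, layer widths, random weights, and label coding below are all illustrative placeholders; the paper used deeper (convolutional) networks trained on real data.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Dummy spectrogram batch: 4 USVs, 64 frequency bins x 128 time frames
# (sizes are illustrative, not the paper's input dimensions)
specs = rng.normal(size=(4, 64, 128))
x = specs.reshape(4, -1)                 # flatten to feature vectors

# Two fully connected layers -> 2 logits (female vs. male)
W1 = rng.normal(0, 0.01, (x.shape[1], 32))
W2 = rng.normal(0, 0.01, (32, 2))
h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
p = softmax(h @ W2)                      # class probabilities per USV

pred_sex = p.argmax(axis=1)              # 0 = female, 1 = male (arbitrary coding)
```

With untrained weights the predictions are of course meaningless; the point is the data flow that the reported accuracies (77% single-step vs. 60% two-step) compare across architectures.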
Investigations of auditory neuronal firing rate as a function of sound level have revealed a wide variety of rate-level function shapes, including neurons with nonmonotonic or level-tuned functions. These neurons have an unclear role in auditory processing but have been found to be quite common. In the present study of awake marmoset primary auditory cortex (A1), 56% (305 out of 544 neurons), when stimulated with tones at the highest sound level tested, exhibited a decrement in driven rate of at least 50% from the maximum driven rate. These nonmonotonic neurons demonstrated significantly lower response thresholds than monotonic neurons, although both populations exhibited thresholds skewed toward lower values. Nonmonotonic neurons significantly outnumbered monotonic neurons in the frequency range 6-13 kHz, which is the frequency range containing most marmoset vocalization energy. Spontaneous rate was inversely correlated with threshold in both populations, and spontaneous rates of nonmonotonic neurons had significantly lower values than spontaneous rates of monotonic neurons, although distributions of maximum driven rates were not significantly different. Finally, monotonicity was found to be organized within electrode penetrations like characteristic frequency but with less structure. These findings are consistent with the hypothesis that nonmonotonic neurons play a unique role in representing sound level, particularly at the lowest sound levels and for complex vocalizations.
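The classification criterion used above (a neuron is nonmonotonic if its driven rate at the highest level tested drops by at least 50% from its maximum driven rate) is simple to state in code. The example rate-level functions below are made-up illustrations, not data from the study.

```python
import numpy as np

def is_nonmonotonic(levels_db, rates, spont=0.0, cutoff=0.5):
    """Classify a rate-level function by the abstract's criterion:
    nonmonotonic if the driven rate at the highest sound level tested
    falls by at least `cutoff` (50%) from the maximum driven rate."""
    driven = np.asarray(rates, dtype=float) - spont
    peak = driven.max()
    at_max_level = driven[np.argmax(levels_db)]
    return bool(at_max_level <= (1.0 - cutoff) * peak)

levels = [10, 20, 30, 40, 50, 60, 70, 80]   # dB SPL
tuned  = [2, 10, 30, 55, 40, 20, 8, 3]      # peaks at mid levels, then falls
mono   = [1, 3, 8, 15, 25, 35, 42, 45]      # keeps growing with level
```

Here `is_nonmonotonic(levels, tuned)` is True (the rate at 80 dB is far below half the peak of 55) while `is_nonmonotonic(levels, mono)` is False.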
Research Highlights
• Neurons tuned to sound intensity or level are the most common neurons in primary auditory cortex.
• These neurons are the most sensitive neuronal class to low-intensity sounds.
• These neurons are most common at frequencies where most vocalization energy resides.
• Neuronal response patterns to sounds at different intensities are organized into cortical columns.
Neuronal responses and topographic organization of feature selectivity in the cerebral cortex are shaped by ascending inputs and by intracortical connectivity. The mammalian primary auditory cortex has a tonotopic arrangement at large spatial scales (greater than 300 μm). This large-scale architecture breaks down in supragranular layers at smaller scales (around 300 μm), where nearby frequency and sound-level tuning properties can be quite heterogeneous. Since layer 4 has a more homogeneous architecture, the heterogeneity in supragranular layers might be caused by heterogeneous ascending input or by heterogeneous intralaminar connections. Here we measure the functional two-dimensional spatial connectivity pattern of the supragranular auditory cortex at micro-column scales. In general, connection probability decreases with radial distance from each neuron, but the decrease is steeper along the isofrequency axis, leading to an anisotropic distribution of connection probability with respect to the tonotopic axis. In addition to this radial decrease in connection probability, we find a patchy organization of inhibitory and excitatory synaptic inputs that is also anisotropic with respect to the tonotopic axis. These periodicities occur at spatial scales of ~100 and ~300 μm. While these spatial periodicities show anisotropy in auditory cortex, they are isotropic in visual cortex, indicating region-specific differences in intralaminar connections. Together, our results show that layer 2/3 neurons in auditory cortex exhibit specific spatial intralaminar connectivity despite their overtly heterogeneous tuning properties.
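The anisotropic radial fall-off of connection probability described above can be summarized with a simple model: Gaussian decay with different length constants along the tonotopic and isofrequency axes, with the steeper decay along the isofrequency axis. The baseline probability and the two length constants below are illustrative values, not parameters fitted to the recordings.

```python
import numpy as np

def connection_prob(dx_tono, dy_iso, p0=0.3,
                    sigma_tono=120.0, sigma_iso=80.0):
    """Anisotropic Gaussian fall-off of connection probability with
    distance (in micrometers) from a reference neuron. The smaller
    sigma along the isofrequency axis makes the decay steeper there,
    as reported; all numeric values are illustrative placeholders."""
    return p0 * np.exp(-0.5 * ((dx_tono / sigma_tono) ** 2
                               + (dy_iso / sigma_iso) ** 2))

# At equal 100-um offsets, a connection along the tonotopic axis is
# more likely than one along the isofrequency axis under this model.
p_tono = connection_prob(100.0, 0.0)
p_iso = connection_prob(0.0, 100.0)
```

A visual-cortex version of the same model would simply set the two sigmas equal (isotropic decay), which is the regional difference the abstract highlights.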