The ability to vocalize is ubiquitous in vertebrates, but the neural networks underlying vocal control remain poorly understood. Here, we performed simultaneous neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus, CN) during the production of echolocation pulses and communication calls in bats. This approach allowed us to assess both general aspects of vocal production in mammals and the unique evolutionary adaptations of bat echolocation. Our data indicate that before vocalization, a distinctive change in high-gamma and beta oscillations (50-80 Hz and 12-30 Hz, respectively) takes place in the bat frontal cortex and dorsal striatum. Such precise fine-tuning of neural oscillations could allow animals to selectively activate the motor programs required for producing either echolocation or communication vocalizations. Moreover, the functional coupling between frontal and striatal areas, which occurs in the theta oscillatory band (4-8 Hz), differs markedly at the millisecond level depending on whether the animals are in a navigational mode (that is, emitting echolocation pulses) or in a social communication mode (emitting communication calls). Overall, this study indicates that fronto-striatal oscillations could provide a neural correlate for vocal control in bats.
Experimental evidence supports the idea that cortical oscillations represent the multiscale temporal modulations present in natural stimuli, yet little is known about how these multiple timescales are processed at the neuronal level. Here, using extracellular recordings from the auditory cortex (AC) of awake bats (Carollia perspicillata), we show the existence of three neuronal types that represent different levels of the temporal structure of conspecific vocalizations, and therefore constitute direct evidence of multiscale temporal processing of naturalistic stimuli by neurons in the AC. These neuronal subpopulations synchronize differently to local-field potentials, particularly in the theta and high-frequency bands, and differ in how informative their spike rates are. Interestingly, we also observed that both low- and high-frequency cortical oscillations can be highly informative about the calls heard. Our results suggest that multiscale neuronal processing allows for a precise and non-redundant representation of natural vocalizations in the AC.
Most mammals rely on extracting acoustic information from the environment in order to survive. However, the mechanisms that support sound representation in auditory neural networks spanning sensory and association brain areas remain underexplored. In this study, we address the functional connectivity between an auditory region in frontal cortex (the frontal auditory field, FAF) and the auditory cortex (AC) in the bat Carollia perspicillata. The AC is a classic sensory area central to the processing of acoustic information. The FAF, in contrast, belongs to the frontal lobe, a brain region involved in the integration of sensory inputs, the modulation of cognitive states, and the coordination of behavioral outputs. The FAF-AC network was examined in terms of oscillatory coherence (local-field potentials, LFPs) and within an information-theoretic framework linking FAF and AC spiking activity. We show that in the absence of acoustic stimulation, simultaneously recorded LFPs from FAF and AC are coherent at low frequencies (1-12 Hz). This "default" coupling was strongest in deep AC layers and was unaltered by acoustic stimulation. However, presenting auditory stimuli did trigger the emergence of coherent auditory-evoked gamma-band activity (>25 Hz) between FAF and AC. In terms of spiking, our results suggest that FAF and AC engage in distinct coding strategies for representing artificial and natural sounds. Taken together, our findings shed light on the neuronal coding strategies and functional coupling mechanisms that enable sound representation at the network level in the mammalian brain.
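For readers unfamiliar with the coherence measure mentioned above, magnitude-squared coherence between two simultaneously recorded signals can be sketched as follows. This minimal Python example uses synthetic traces and illustrative parameters (it is not the study's data or analysis pipeline): two noisy "LFPs" sharing a 6 Hz theta-band component show high coherence only near that frequency.

```python
import numpy as np

def coherence(x, y, fs, nperseg=1024):
    """Welch-averaged magnitude-squared coherence (Hann window, 50% overlap)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    sxx = syy = sxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[start:start + nperseg])
        fy = np.fft.rfft(win * y[start:start + nperseg])
        sxx = sxx + np.abs(fx) ** 2          # auto-spectrum of x, summed over segments
        syy = syy + np.abs(fy) ** 2          # auto-spectrum of y
        sxy = sxy + fx * np.conj(fy)         # cross-spectrum
    coh = np.abs(sxy) ** 2 / (sxx * syy)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, coh

# Two signals sharing a 6 Hz (theta-band) component plus independent noise.
fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 6 * t)
x = shared + rng.standard_normal(t.size)
y = shared + rng.standard_normal(t.size)
freqs, coh = coherence(x, y, fs)
print(f"coherence near 6 Hz: {coh[np.argmin(np.abs(freqs - 6))]:.2f}")
```

Because the shared component is confined to one narrow band, coherence is high near 6 Hz and near chance level elsewhere, which is the logic behind identifying "default" low-frequency FAF-AC coupling.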
Sound discrimination is essential in many species for communicating and foraging. Bats, for example, use sounds for both echolocation and communication. The bat auditory cortex contains neurons that process both sound categories, but how these neurons respond to acoustic transitions, that is, echolocation streams followed by a communication sound, remains unknown. Here, we show that acoustic context, a leading sound sequence followed by a target sound, changes the neuronal discriminability of echolocation versus communication calls in the cortex of awake bats of both sexes. Nonselective neurons that fire equally well to echolocation and communication calls in the absence of context become category selective when leading context is present. Conversely, neurons that prefer communication sounds in the absence of context become nonselective when context is added. The presence of context leads to an overall response suppression, but the strength of this suppression is stimulus specific: suppression is strongest when context and target sounds belong to the same category, e.g., echolocation followed by echolocation. A neuron model of stimulus-specific adaptation replicated our results in silico. The model predicts selectivity to communication and echolocation sounds in the inputs arriving at the auditory cortex, as well as two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs, and stimulus-unspecific postsynaptic adaptation. In addition, the model predicted that context effects can last up to 1.5 s after context offset, and that synaptic inputs tuned to low-frequency sounds (communication signals) have the shortest decay constant of presynaptic adaptation.
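The presynaptic, category-specific adaptation mechanism described above can be illustrated with a toy model. This is a sketch under simplifying assumptions (the two-channel setup, time step, and all constants are illustrative, not the published model's parameters): each category-tuned input channel carries a synaptic resource that depletes when its preferred sound arrives and recovers exponentially, so suppression of the target is strongest when context and target share a category.

```python
import numpy as np

def respond(sequence, dt=0.05, tau=1.5, depletion=0.6):
    """sequence: one category label ('echo', 'comm', or None) per dt-second bin.
    Returns the response to each stimulus = resource remaining in its channel."""
    r = {"echo": 1.0, "comm": 1.0}
    out = []
    for cat in sequence:
        for k in r:  # exponential recovery of both channels toward full resource
            r[k] += (1.0 - r[k]) * (1.0 - np.exp(-dt / tau))
        if cat is not None:
            out.append(r[cat])            # response scales with remaining resource
            r[cat] *= (1.0 - depletion)   # presynaptic depletion of that channel
    return out

# Context of four echolocation pulses, then a target 200 ms after context offset.
context = ["echo", None, "echo", None, "echo", None, "echo", None, None, None, None]
same = respond(context + ["echo"])[-1]   # target matches context category
diff = respond(context + ["comm"])[-1]   # target from the other category
print(f"same-category target response: {same:.2f}")
print(f"different-category target response: {diff:.2f}")
```

The same-category target inherits the depletion accumulated during the context, while the different-category channel is untouched, reproducing the stimulus-specific suppression described above in miniature.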
This article presents a characterization of cortical responses to artificial and natural temporally patterned sounds in the bat species Carollia perspicillata, a species that produces vocalizations at rates above 50 Hz. Multi-unit activity was recorded in three different experiments. In the first experiment, amplitude-modulated (AM) pure tones were used as stimuli to drive auditory cortex (AC) units. AC units of both ketamine-anesthetized and awake bats could lock their spikes to every cycle of the stimulus modulation envelope, but only if the modulation frequency was below 22 Hz. In the second experiment, two identical communication syllables were presented at variable intervals. Suppressed responses to the lagging syllable were observed, unless the second syllable followed the first one with a delay of at least 80 ms (i.e., 12.5 Hz repetition rate). In the third experiment, natural distress vocalization sequences were used as stimuli to drive AC units. Distress sequences produced by C. perspicillata contain bouts of syllables repeated at intervals of ~60 ms (16 Hz). Within each bout, syllables are repeated at intervals as short as 14 ms (~71 Hz). Cortical units could follow the slow temporal modulation flow produced by the occurrence of multisyllabic bouts, but not the fast acoustic flow created by rapid syllable repetition within the bouts. Taken together, our results indicate that even in fast vocalizing animals, such as bats, cortical neurons can only track the temporal structure of acoustic streams modulated at frequencies lower than 22 Hz.
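Amplitude-modulated pure-tone stimuli like those of the first experiment can be sketched in a few lines. The sampling rate, carrier frequency, and duration below are illustrative choices, not the exact stimulus parameters used in the study; the modulation frequency is set below the ~22 Hz locking limit reported above.

```python
import numpy as np

fs = 192_000        # sample rate (Hz); illustrative
f_carrier = 40_000  # ultrasonic carrier (Hz); illustrative
f_mod = 20          # modulation frequency, below the ~22 Hz cortical limit
dur = 1.0           # stimulus duration (s)

t = np.arange(int(fs * dur)) / fs
envelope = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))  # sinusoidal AM, 100% depth
stimulus = envelope * np.sin(2 * np.pi * f_carrier * t)

# One modulation cycle lasts 1/f_mod seconds, i.e. f_mod cycles per second
# for neurons to lock their spikes to.
print(f"modulation period: {1000 / f_mod:.0f} ms")
```

Spike-locking to "every cycle of the modulation envelope" then means one response per 50 ms period here; at modulation rates above ~22 Hz the period drops below what the cortical units could follow.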
Communication sounds are ubiquitous in the animal kingdom, where they play a role in advertising physiological states and/or socio-contextual scenarios. Human screams, for example, are typically uttered in fearful contexts and carry a distinctive feature termed "roughness", which denotes amplitude fluctuations at rates of 30-150 Hz. In this article, we report that the occurrence of fast acoustic periodicities in harsh-sounding vocalizations is not unique to humans. A roughness-like structure is also present in vocalizations emitted by bats (species Carollia perspicillata) in distressful contexts. We report that 47.7% of the distress calls produced by bats carry amplitude fluctuations at rates of ~1.7 kHz (>10 times faster than the temporal modulations found in human screams). In bats, rough-like vocalizations entrain brain potentials and are more effective at accelerating the bats' heart rate than slowly amplitude-modulated sounds. Our results are consistent with a putative role of fast amplitude modulations (roughness, in humans) in grabbing the listener's attention when the emitter is in a distressful, potentially dangerous context.
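Amplitude-fluctuation rates of the kind reported above can be estimated from the envelope spectrum of a recorded call. The sketch below is one standard way to do this, not necessarily the analysis used in the study, and the synthetic "call" and all parameters are illustrative: compute the Hilbert envelope via the FFT analytic-signal construction, then take the dominant peak of the envelope spectrum.

```python
import numpy as np

def envelope_peak_rate(x, fs):
    """Dominant amplitude-fluctuation rate: Hilbert envelope via the FFT
    analytic-signal construction, then the strongest peak of its spectrum."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)            # analytic-signal weights: keep positive frequencies
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(X * h))               # instantaneous amplitude
    spec = np.abs(np.fft.rfft(env - env.mean()))   # envelope spectrum, DC removed
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs[np.argmax(spec)]

# Toy "rough" call: 20 kHz carrier with 1.7 kHz amplitude fluctuations.
fs = 250_000
t = np.arange(int(0.05 * fs)) / fs
rough = (1 + 0.8 * np.sin(2 * np.pi * 1700 * t)) * np.sin(2 * np.pi * 20_000 * t)
print(f"estimated fluctuation rate: {envelope_peak_rate(rough, fs):.0f} Hz")
```

Applied to a slowly modulated sound, the same function would return a peak in the tens of hertz, which is how rough and non-rough calls can be separated by their envelope spectra.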
Processing of ethologically relevant stimuli can be disrupted by interference from non-relevant stimuli. Animals have behavioral adaptations that reduce such signal interference, but whether these adaptations also facilitate neuronal processing of relevant stimuli remains largely unexplored. Here, we characterize behavioral adaptations to biotic noise in the echolocating bat Carollia perspicillata and show that these adaptations could facilitate neuronal processing of biosonar information. During echolocation, bats need to extract their own signals amid the vocalizations of conspecifics. With playback experiments, we demonstrate that C. perspicillata increases its sensory acquisition rate by emitting groups of echolocation calls when flying in noisy environments. Our neurophysiological results from the auditory midbrain and cortex show that this high sensory acquisition rate does not vastly increase neuronal suppression, and that the response to an echolocation sequence is partially preserved in the presence of biosonar signals from conspecifics.