Although the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measurable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms.
Dopamine is hypothesized to convey error information in reinforcement learning tasks with explicit appetitive or aversive cues. However, during motor skill learning, feedback signals arise from an animal's evaluation of sensory feedback resulting from its own behavior, rather than from any external reward or punishment. It has previously been shown that intact dopaminergic signaling from the ventral tegmental area/substantia nigra pars compacta (VTA/SNc) complex is necessary for vocal learning when songbirds modify their vocalizations to avoid hearing distorted auditory feedback (playbacks of white noise). However, it remains unclear whether dopaminergic signaling underlies vocal learning in response to more naturalistic errors (pitch-shifted feedback delivered via headphones). We used male Bengalese finches (Lonchura striata var. domestica) to test the hypothesis that the necessity of dopamine signaling is shared between the two types of learning. We combined 6-hydroxydopamine (6-OHDA) lesions of dopaminergic terminals within Area X, a basal ganglia nucleus critical for song learning, with a headphones learning paradigm that shifted the pitch of auditory feedback, and compared lesioned birds' learning to that of unlesioned controls. We found that 6-OHDA lesions affected song behavior in two ways. First, over a period of days, lesioned birds systematically lowered their pitch regardless of the presence or absence of auditory errors. Second, 6-OHDA-lesioned birds also displayed severe deficits in sensorimotor learning in response to pitch-shifted feedback. Our results suggest roles for dopamine in both motor production and auditory error processing, and a shared mechanism underlying vocal learning in response to both distorted and pitch-shifted auditory feedback.
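Paradigms like these depend on reliably measuring a syllable's fundamental frequency before and during feedback shifts. As an illustrative sketch only (this is not the study's analysis code; the function name and all parameter values are hypothetical), pitch can be estimated from a short recording by locating the autocorrelation peak within a plausible lag range:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=100.0, fmax=1000.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    signal = signal - np.mean(signal)
    # Full autocorrelation; keep only non-negative lags
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Restrict the lag search to the plausible pitch range
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# Synthetic "syllable": a 600 Hz tone, 100 ms at 32 kHz
sr = 32000
t = np.arange(0, 0.1, 1.0 / sr)
syllable = np.sin(2 * np.pi * 600.0 * t)
print(estimate_pitch(syllable, sr))  # ≈ 600 Hz (lag quantization gives a few Hz of error)
```

Real song syllables are noisy and harmonically complex, so published analyses typically window the syllable and smooth or interpolate the autocorrelation; this sketch shows only the core idea.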
Generalization, the brain's ability to transfer motor learning from one context to another, occurs in a wide range of complex behaviors. However, the rules of generalization in vocal behavior are poorly understood, and it is unknown how vocal learning generalizes across an animal's entire repertoire of natural vocalizations and sequences. Here, we asked whether generalization occurs in a nonhuman vocal learner and quantified its properties. We hypothesized that adaptive error correction of a vocal gesture produced in one sequence would generalize to the same gesture produced in other sequences. To test our hypothesis, we manipulated the fundamental frequency (pitch) of auditory feedback in Bengalese finches (Lonchura striata var. domestica) to create sensory errors during vocal gestures (song syllables) produced in particular sequences. As hypothesized, error-corrective learning on pitch-shifted vocal gestures generalized to the same gestures produced in other sequential contexts. Surprisingly, generalization magnitude depended strongly on sequential distance from the pitch-shifted syllables, with greater adaptation for gestures produced close to the pitch-shifted syllable. A further unexpected result was that nonshifted syllables changed their pitch in the direction opposite from the shifted syllables. This apparently antiadaptive pattern of generalization could not be explained by correlations between generalization and acoustic similarity to the pitch-shifted syllable. These findings therefore suggest that generalization depends on the type of vocal gesture and its sequential context relative to other gestures, and may reflect an advantageous strategy for vocal learning and maintenance.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity [1]. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements [2,3] and how auditory feedback is used to modify speech production [4-7]. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output.

Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity [8,9]. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior [9-12]. However, the lack of a naturalistic error-correction paradigm, in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird, has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors [13].

The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning.
Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction [14]. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.

Video Link

The video component of this article can be found at http://www.jove.com/video/50027/

Protocol

Implementing the headphones system consists of four major steps. Section 1 below details the assembly of the headphones frame, which houses the electronics (speakers and a miniaturized microphone). Section 2 describes how the frame is attached to the bird. Section 3 describes the assembly of the electronics. Section 4 explains how the electronics are connected to sound-processing and data-collection hardware and details a procedure for testing that the system is functioning...
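The real-time shift itself is produced by dedicated sound-processing hardware, but the core operation can be illustrated offline. The sketch below is hypothetical demonstration code, not part of the published protocol: it shifts a synthetic tone upward by one semitone via simple resampling, a simplification that also changes duration (which the real-time apparatus, by construction, does not), and verifies the shift in the spectrum:

```python
import numpy as np

def pitch_shift(signal, semitones):
    """Shift pitch by resampling (also changes duration; fine for a demo)."""
    factor = 2.0 ** (semitones / 12.0)
    idx = np.arange(0, len(signal), factor)  # read samples faster -> higher pitch
    return np.interp(idx, np.arange(len(signal)), signal)

def dominant_freq(signal, sr):
    """Frequency of the largest magnitude peak in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    return freqs[np.argmax(spectrum)]

sr = 32000
t = np.arange(0, 0.5, 1.0 / sr)
tone = np.sin(2 * np.pi * 500.0 * t)
shifted = pitch_shift(tone, 1.0)  # +1 semitone, a shift size typical of such paradigms
print(dominant_freq(tone, sr), dominant_freq(shifted, sr))  # 500 Hz vs. ≈ 530 Hz
```

A duration-preserving shift would require a phase vocoder or hardware harmonizer; the resampling version is only meant to make the frequency transformation concrete.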
While functional connectivity has typically been calculated over the entire length of the scan (5-10 min), interest has been growing in dynamic analysis methods that can detect changes in connectivity on the time scale of cognitive processes (seconds). Previous work with sliding window correlation has shown that changes in functional connectivity can be observed on these time scales in awake humans and in anesthetized animals. This exciting advance creates a need for improved approaches to characterize dynamic functional networks in the brain. Previous studies were performed using sliding window analysis on regions of interest defined based on anatomy or obtained from traditional steady-state analysis methods. The parcellation of the brain may therefore be suboptimal, and the characteristics of the time-varying connectivity between regions are dependent upon the length of the sliding window chosen. This manuscript describes an algorithm based on wavelet decomposition that allows data-driven clustering of voxels into functional regions based on temporal and spectral properties. Previous work has shown that different networks have characteristic frequency fingerprints, and the use of wavelets ensures that both the frequency and the timing of the BOLD fluctuations are considered during the clustering process. The method was applied to resting state data acquired from anesthetized rats, and the resulting clusters agreed well with known anatomical areas. Clusters were highly reproducible across subjects. Wavelet cross-correlation values between clusters from a single scan were significantly higher than the values from randomly matched clusters that shared no temporal information, indicating that wavelet-based analysis is sensitive to the relationship between areas.
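The sliding window correlation analysis referenced above can be sketched minimally. In this hypothetical example (synthetic data; the window length, scan length, and coupling structure are all invented for illustration), Pearson correlation between two regional time series is recomputed as a fixed-length window advances, revealing connectivity that changes over the scan:

```python
import numpy as np

def sliding_window_corr(x, y, window, step=1):
    """Pearson correlation between x and y within each sliding window."""
    out = []
    for start in range(0, len(x) - window + 1, step):
        xs, ys = x[start:start + window], y[start:start + window]
        out.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(out)

# Two synthetic "regional" BOLD-like time series that become coupled
# only in the second half of a hypothetical 200-volume scan.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
x = rng.standard_normal(200)
y = rng.standard_normal(200)
x[100:] += 2.0 * shared[100:]
y[100:] += 2.0 * shared[100:]

r = sliding_window_corr(x, y, window=40)
print(r[:10].mean(), r[-10:].mean())  # near zero early, high late
```

Note the window-length dependence the text describes: a longer window smooths over the transition at volume 100, while a shorter one tracks it more sharply but with noisier correlation estimates.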