Speech processing relies on interactions between auditory and motor systems and is asymmetrically organized in the human brain. The left auditory system is specialized for processing of phonemes, whereas the right is specialized for processing of pitch changes in speech that affect prosody. In speakers of tonal languages, however, processing of pitch (i.e., tone) changes that alter word meaning is left-lateralized, indicating that linguistic function and language experience shape speech processing asymmetries. Here, we investigated the asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal and non-tonal languages. We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulation (TMS) and measured the impact of these disruptions on auditory discrimination (mismatch negativity; MMN) responses to phoneme and tone changes in sequences of syllables using electroencephalography (EEG). We found that the effect of motor disruptions on processing of tone changes differed between language groups: disruption of the right speech motor cortex suppressed responses to tone changes in non-tonal language speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes in tonal language speakers. In non-tonal language speakers, the effects of disruption of left speech motor cortex on responses to tone changes were inconclusive. For phoneme changes, disruption of left but not right speech motor cortex suppressed responses in both language groups. We conclude that the contributions of the right and left speech motor cortex to auditory speech processing are determined by the functional roles of acoustic cues in the listener's native language.

SIGNIFICANCE STATEMENT
The principles underlying hemispheric asymmetries of auditory speech processing remain debated.
The asymmetry of processing of speech sounds is affected by low-level acoustic cues, but also by their linguistic function. By combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG), we investigated the asymmetry of motor contributions to auditory speech processing in tonal and non-tonal language speakers. We provide causal evidence that the functional role of the acoustic cues in the listener's native language affects the asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negativity (MMN) responses]. Lateralized top-down motor influences can affect the asymmetry of speech processing in the auditory system.
When auditory feedback perturbation is introduced in a predictable way over a number of utterances, speakers learn to compensate by adjusting their own productions, a process known as sensorimotor adaptation. Despite multiple lines of evidence indicating the role of primary motor cortex (M1) in motor learning and memory, whether M1 causally contributes to sensorimotor adaptation in the speech domain remains unclear. Here, we aimed to assess whether temporary disruption of the articulatory representation in left M1 by repetitive transcranial magnetic stimulation (rTMS) impairs speech adaptation. To induce sensorimotor adaptation, the frequency of the first formant (F1) was shifted up and played back to participants when they produced “head”, “bed”, and “dead” repeatedly (the learning phase). A low-frequency rTMS train (0.6 Hz, subthreshold, 12 min) over either the tongue or the hand representation of M1 (between-subjects design) was applied before participants experienced altered auditory feedback in the learning phase. We found that the group who received rTMS over the hand representation showed the expected compensatory response for the upward shift in F1 by significantly reducing F1 and increasing the second formant (F2) frequencies in their productions. In contrast, these expected compensatory changes in both F1 and F2 did not occur in the group that received rTMS over the tongue representation. Critically, rTMS (subthreshold) over the tongue representation did not affect vowel production, which was unchanged from baseline. These results provide direct evidence that the articulatory representation in left M1 causally contributes to sensorimotor learning in speech. Furthermore, these results also suggest that M1 is critical to the network supporting a more global adaptation that aims to move the altered speech production closer to a learnt pattern of speech production used to produce another vowel.
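Compensation of the kind described above is commonly summarized with a single-state state-space model of sensorimotor adaptation. The sketch below is an illustrative simulation of gradual compensation to an upward F1 shift, not the authors' analysis; the shift size, retention factor, and learning rate are all assumed values.

```python
# Illustrative single-state state-space model of sensorimotor adaptation
# to an upward F1 shift. All parameter values are hypothetical.

def simulate_adaptation(n_trials=60, shift_hz=100.0,
                        retention=0.95, learning_rate=0.15):
    """Return the compensatory F1 change (Hz) before each trial."""
    state = 0.0            # current compensation; negative = lowered F1
    trajectory = []
    for _ in range(n_trials):
        trajectory.append(state)
        heard_error = shift_hz + state   # perturbation plus current compensation
        # Error-driven update: retain part of the state and correct a
        # fraction of the perceived upward shift in the opposite direction.
        state = retention * state - learning_rate * heard_error
    return trajectory

trajectory = simulate_adaptation()
```

With these assumed parameters the compensation approaches an asymptote of −shift·lr / (1 − retention + lr) = −75 Hz, mirroring the gradual lowering of produced F1 seen in adaptation paradigms.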
Previous studies of tonal speech perception have generally suggested harder or later access to lexical tone than to segmental information, but the mechanism underlying the lexical tone disadvantage is unclear. Using a speeded discrimination paradigm free of context information, we replicated the lexical tone disadvantage and also revealed a distinctive advantage of word and atonal syllable judgments over phoneme and lexical tone judgments. The results led us to propose a Reverse Accessing Model (RAM) for tonal speech perception. The RAM is an extension of the influential TRACE model, with two additional processing levels specialized for tonal speech: lexical tone and atonal syllable. Critically, information accessing is assumed to be in reverse order of information processing, and only information at the syllable level and up is maintained active for immediate use. We tested and confirmed the predictions of the RAM on discrimination of each type of phonological component under different stimulus conditions. The current results have thus demonstrated the capability of the RAM as a general framework for tonal speech perception to provide a unified account for empirical observations as well as to generate testable predictions.
When individuals make a movement that produces an unexpected outcome, they learn from the resulting error. This process, essential in both acquiring new motor skills and adapting to changing environments, critically relies on error sensitivity, which governs how much behavioral change results from a given error. Although behavioral and computational evidence suggests error sensitivity can change in response to task demands, neural evidence regarding the flexibility of error sensitivity in the human brain is lacking. Critically, the sensitivity of the nervous system to auditory errors during speech production, a complex and well-practiced motor behavior, has been extensively studied by examining the prediction-driven suppression of auditory cortical activity. Here, we tested whether the nervous system's sensitivity to errors, as measured by this suppression, can be modulated by altering speakers' perceived variability. Our results showed that error sensitivity was increased after exposure to an auditory perturbation that increased participants' perceived variability, consistent with predictions generated from previous behavioral data and state-space modeling. Conversely, we observed no significant changes in error sensitivity when perceived variability was unaltered or artificially reduced. The current study establishes the validity of behaviorally modulating the nervous system's sensitivity to errors. As sensitivity to sensory errors plays a critical role in sensorimotor adaptation, modifying error sensitivity has the potential to enhance motor learning and rehabilitation in speech and, potentially, more broadly across motor domains.
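In state-space terms, error sensitivity is the gain applied to each sensory error when updating the motor plan. The toy simulation below (hypothetical parameter values, not the study's model fits) illustrates why a higher error sensitivity yields faster and more complete adaptation to the same perturbation.

```python
# Toy single-state model: x is the adaptive state, 'a' a retention factor,
# and 'b' the error sensitivity. All values are illustrative only.

def adapt(n_trials, b, a=0.9, perturbation=1.0):
    x = 0.0
    for _ in range(n_trials):
        error = perturbation - x     # residual sensory error on this trial
        x = a * x + b * error        # sensitivity b scales the correction
    return x

low = adapt(n_trials=30, b=0.1)    # baseline error sensitivity
high = adapt(n_trials=30, b=0.3)   # increased error sensitivity
# Higher b drives the state closer to full compensation for the perturbation.
```

The steady state of this model is b / (1 − a + b), so raising b from 0.1 to 0.3 moves the asymptote from 50% to 75% of the perturbation, which is one way to formalize "more behavioral change per error."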
Although movement variability is often attributed to unwanted noise in the motor system, recent work has demonstrated that variability may be actively controlled. To date, research on regulation of motor variability has relied on relatively simple, laboratory-specific reaching tasks. It is not clear how these results translate to complex, well-practiced tasks. Here, we test how variability is regulated during speech production, a complex, highly over-practiced and natural motor behavior that relies on auditory and somatosensory feedback. Specifically, in a series of four experiments, we assessed the effects of auditory feedback manipulations that modulate perceived speech variability, shifting every production either towards (inward-pushing) or away from (outward-pushing) the center of the distribution for each vowel. Participants exposed to the inward-pushing perturbation (Experiment 1) increased produced variability while the perturbation was applied as well as after it was removed. Unexpectedly, the outward-pushing perturbation (Experiment 2) also increased produced variability during exposure, but variability returned to near-baseline levels when the perturbation was removed. Outward-pushing perturbations failed to reduce participants' produced variability either with a larger perturbation magnitude (Experiment 3) or after their variability had increased above baseline levels as a result of the inward-pushing perturbation (Experiment 4). Simulations of the applied perturbations using a state-space model of motor behavior suggest that the increases in produced variability in response to the two types of perturbations may arise through distinct mechanisms. Together, these results suggest that motor variability is actively monitored and can be modulated even in complex and well-practiced behaviors, such as speech.
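The inward- and outward-pushing manipulations described above can be sketched as a simple transform on each production's distance from the vowel center. This is an illustrative reconstruction, not the study's feedback pipeline, and the 0.5 scaling factor is an assumption.

```python
import random
import statistics

# Illustrative sketch of the feedback manipulations: each production is
# shifted toward (inward) or away from (outward) the center of the
# talker's vowel distribution before being played back.

def perturb(productions, direction, scale=0.5):
    """Return played-back values with distances to the center rescaled."""
    center = statistics.mean(productions)
    shifted = []
    for p in productions:
        offset = p - center
        if direction == "inward":     # push toward the center
            shifted.append(center + (1 - scale) * offset)
        else:                         # "outward": push away from the center
            shifted.append(center + (1 + scale) * offset)
    return shifted

random.seed(1)
prods = [random.gauss(500.0, 20.0) for _ in range(200)]   # e.g., F1 in Hz
inward = perturb(prods, "inward")
outward = perturb(prods, "outward")
# Perceived variability shrinks under inward pushing and grows under
# outward pushing, while the produced distribution is unchanged.
```

Because the transform only rescales each production's offset from the center, perceived variability is multiplied by (1 − scale) or (1 + scale) while the perceived mean stays at the vowel center.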
Dopamine is known to modulate sensory plasticity in the animal brain, but how it impacts perceptual learning in humans remains largely unknown. In a placebo-controlled, double-blinded training experiment with young healthy adults (both male and female), oral administration of Madopar, a dopamine precursor, during each of multiple training sessions was shown to enhance auditory perceptual learning, particularly in late training sessions. Madopar also enhanced learning and transfer to working memory when tested outside the time window of the drug effect, and this enhancement was retained for at least 20 days. To test whether such learning modulation was mediated by the dopaminergic working memory network, the same dopamine manipulation was applied to working memory training, but it had little influence on learning or transfer. Further, a neural network model of auditory perceptual learning revealed distinctive behavioural modulation patterns for proposed dopaminergic functions in the auditory cortex: trial-by-trial reinforcement signals (reward/reward prediction error and expected reward) and across-session memory consolidation. Only the memory consolidation simulations matched experimental observations. The results thus demonstrate that dopamine modulates human perceptual learning, most likely via enhancing memory consolidation over extended time scales.