Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at which stage(s) of auditory processing the deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios of 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing and calls for further investigation into how different stages of auditory processing interact in the human brain.
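The FFR measures named above have standard signal-based definitions that can be illustrated with a minimal sketch. The snippet below is an assumption-laden simplification, not the authors' analysis pipeline: pitch strength is approximated as the peak of the normalized autocorrelation within a candidate pitch range, and stimulus-to-response correlation as a Pearson correlation between waveforms; the function names, parameter defaults, and simulated signals are all hypothetical.

```python
import numpy as np

def pitch_strength(x, fs, f_min=80.0, f_max=400.0):
    """Peak of the normalized autocorrelation within the candidate
    pitch range. Values near 1 indicate strong periodicity, i.e.
    robust pitch encoding in the response."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                      # normalize so lag 0 equals 1
    lo = int(fs / f_max)                 # shortest candidate period (samples)
    hi = int(fs / f_min)                 # longest candidate period (samples)
    return float(np.max(ac[lo:hi + 1]))

def stim_to_resp_corr(stim, resp):
    """Pearson correlation between stimulus and response waveforms,
    a crude index of how faithfully the response tracks the stimulus."""
    return float(np.corrcoef(stim, resp)[0, 1])

# Simulated example: a 150 Hz "stimulus" tone and a noisy "response"
fs = 16000
t = np.arange(0, 0.2, 1 / fs)
stim = np.sin(2 * np.pi * 150 * t)
rng = np.random.default_rng(0)
resp = stim + 0.3 * rng.standard_normal(len(t))

ps = pitch_strength(resp, fs)
r = stim_to_resp_corr(stim, resp)
```

In this toy setup, a strongly periodic response yields a pitch strength close to 1 and a high stimulus-to-response correlation; added noise lowers both, which is the intuition behind comparing these measures across groups.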
Musical experience and linguistic experience have both been shown to facilitate language and music perception. However, the precise nature of the interaction between music and language is still a subject of ongoing research. In this study, using a subcortical electrophysiological measure (the frequency-following response), we sought to understand how linguistic and musical pitch experience jointly affect subcortical encoding of lexical and musical pitch. We compared musicians and non-musicians who were native speakers of a tone language on subcortical encoding of linguistic and musical pitch. We found that musicians and non-musicians did not differ in brainstem encoding of lexical tones. However, musicians showed more robust brainstem encoding of musical pitch than non-musicians. These findings suggest that combined musical and linguistic pitch experience affects auditory brainstem encoding of linguistic and musical pitch differentially. Our results further suggest that native tone-language speakers may rely on two distinct mechanisms, at least for the subcortical encoding of linguistic and musical pitch.
The current study revealed that the four subsections of the STAP merged to form three distinct components. Dichotic CV and gap detection formed two independent components, while speech perception in noise and auditory memory merged into a single component. This indicates a possible relationship between auditory memory and speech perception in noise, as suggested by Katz (1992). Thus, the STAP is able to detect three different components related to auditory processing. The study also indicates that the number of children at risk for each of the different auditory processes varies. Ongoing evaluation will shed light on the usefulness of the STAP subsections in identifying auditory processing problems. In addition to conducting the APD screening test, it is also recommended that a hearing screening be done to rule out peripheral hearing problems when hearing screening programs are not conducted in schools.
We investigated the development of early-latency and long-latency brain responses to native and non-native speech to shed light on the neurophysiological underpinnings of perceptual narrowing and early language development. Specifically, we postulated a two-level process to explain the decrease in sensitivity to non-native phonemes towards the end of infancy. Neurons at the earlier stages of the ascending auditory pathway mature rapidly during infancy, facilitating the encoding of both native and non-native sounds. This growth enables neurons at the later stages of the auditory pathway to assign phonological status to speech according to the infant's native language environment. To test this hypothesis, we collected early-latency and long-latency neural responses to native and non-native lexical tones from 85 Cantonese-learning children aged between 23 days and 24 months and 16 days. As expected, a broad range of presumably subcortical early-latency neural encoding measures grew rapidly and substantially during the first two years for both native and non-native tones. By contrast, long-latency cortical electrophysiological changes occurred on a much slower scale and showed sensitivity to nativeness at around six months. Our study provided a comprehensive understanding of early language development by revealing the complementary roles of earlier and later stages of speech processing in the developing brain.