The comparative study of music and language is attracting growing research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.
Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The “OPERA” hypothesis proposes that these benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope); (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing; (3) Emotion: the musical activities that engage this network elicit strong positive emotion; (4) Repetition: the musical activities that engage this network are frequently repeated; and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met, neural plasticity drives the networks in question to function with higher precision than is needed for ordinary speech communication. Yet because speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.
In order to test the language-specificity of a known neural correlate of syntactic processing [the P600 event-related brain potential (ERP) component], this study directly compared ERPs elicited by syntactic incongruities in language and music. Using principles of phrase structure for language and principles of harmony and key-relatedness for music, sequences were constructed in which an element was either congruous, moderately incongruous, or highly incongruous with the preceding structural context. A within-subjects design using 15 musically educated adults revealed that linguistic and musical structural incongruities elicited positivities that were statistically indistinguishable in a specified latency range. In contrast, a music-specific ERP component was observed that showed antero-temporal right-hemisphere lateralization. The results argue against the language-specificity of the P600 and suggest that language and music can be studied in parallel to address questions of neural specificity in cognitive processing.
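For readers unfamiliar with the measurement behind this comparison, the following Python sketch illustrates, on simulated data, how a mean ERP amplitude in a fixed latency window can be computed and compared across conditions. The window limits, trial counts, and variable names are illustrative assumptions, not the study's parameters or analysis code.

```python
import numpy as np

def mean_amplitude(epochs, times, t_min, t_max):
    """Average ERP amplitude within a latency window.

    epochs: (n_trials, n_samples) baseline-corrected EEG, time-locked to
    the target element; times: sample times in seconds.
    """
    erp = epochs.mean(axis=0)                     # average over trials
    window = (times >= t_min) & (times <= t_max)  # analysis window
    return erp[window].mean()

# Simulated data: the incongruous condition carries a positivity peaking
# near 600 ms, loosely mimicking a P600-like effect.
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 1.0, 551)               # -100 to 1000 ms
congruous = rng.normal(0.0, 1.0, (30, times.size))
incongruous = rng.normal(0.0, 1.0, (30, times.size)) + 2.0 * np.exp(
    -((times - 0.6) ** 2) / (2 * 0.1**2))
effect = (mean_amplitude(incongruous, times, 0.5, 0.8)
          - mean_amplitude(congruous, times, 0.5, 0.8))
print(f"simulated incongruity effect: {effect:.2f} (arbitrary units)")
```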
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility that should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
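As a concrete illustration of the analysis described above, here is a minimal Python sketch of estimating a slow temporal modulation spectrum from an audio signal. The specific choices (Hilbert envelope, naive decimation, Welch's method) are common defaults assumed for the sketch, not necessarily the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert, welch

def modulation_spectrum(signal, fs, fs_env=200):
    """Estimate the slow (0.25-32 Hz) temporal modulation spectrum.

    Extracts the amplitude envelope via the Hilbert transform, decimates
    it, and computes the envelope's power spectrum. A production analysis
    would low-pass filter the envelope before decimating.
    """
    envelope = np.abs(hilbert(signal))        # amplitude envelope
    step = int(fs // fs_env)
    env_ds = envelope[::step]                 # decimated envelope
    freqs, power = welch(env_ds, fs=fs / step, nperseg=int(4 * fs / step))
    band = (freqs >= 0.25) & (freqs <= 32.0)
    return freqs[band], power[band]

# A tone amplitude-modulated at 5 Hz shows a modulation peak near 5 Hz,
# the region where the speech modulation spectrum peaks.
fs = 16000
t = np.arange(0, 60, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t) * (1 + 0.8 * np.sin(2 * np.pi * 5 * t))
freqs, power = modulation_spectrum(tone, fs)
print(f"peak modulation frequency: {freqs[np.argmax(power)]:.2f} Hz")
```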
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view: beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
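The temporal prediction at the heart of ASAP can be made concrete with a deliberately simple Python sketch: estimate the current beat period from recent inter-onset intervals and extrapolate one period ahead. This is an illustrative stand-in for the hypothesized auditory-motor computation, not a model from the paper.

```python
import numpy as np

def predict_next_beat(onsets):
    """Predict the next beat time from recent beat onsets.

    Estimates the beat period as the median inter-onset interval (robust
    to timing noise) and extrapolates one period past the last onset.
    """
    onsets = np.asarray(onsets)
    period = np.median(np.diff(onsets))
    return onsets[-1] + period

# The same rule generalizes across a broad range of tempi.
rng = np.random.default_rng(1)
for period in (0.4, 0.8, 1.2):                    # beat periods in seconds
    onsets = np.arange(8) * period + 0.01 * rng.standard_normal(8)
    print(f"period {period:.1f} s -> next beat near {predict_next_beat(onsets):.3f} s")
```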
The tendency to move in rhythmic synchrony with a musical beat (e.g., via head bobbing, foot tapping, or dance) is a human universal [1] yet is not commonly observed in other species [2]. Does this ability reflect a brain specialization for music cognition, or does it build on neural circuitry that ordinarily serves other functions? According to the "vocal learning and rhythmic synchronization" hypothesis [3], entrainment to a musical beat relies on the neural circuitry for complex vocal learning, an ability that requires a tight link between auditory and motor circuits in the brain [4, 5]. This hypothesis predicts that only vocal learning species (such as humans and some birds, cetaceans, and pinnipeds, but not nonhuman primates) are capable of synchronizing movements to a musical beat. Here we report experimental evidence for synchronization to a beat in a sulphur-crested cockatoo (Cacatua galerita eleonora). By manipulating the tempo of a musical excerpt across a wide range, we show that the animal spontaneously adjusts the tempo of its rhythmic movements to stay synchronized with the beat. These findings indicate that synchronization to a musical beat is not uniquely human and suggest that animal models can provide insights into the neurobiology and evolution of human music [6].
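Synchronization of this kind is commonly quantified with circular statistics on the phase of each movement within its beat cycle, which handle tempo changes naturally. The Python sketch below computes the mean resultant vector length R as an illustration; the function and data are hypothetical, not the study's analysis code.

```python
import numpy as np

def phase_locking(movement_times, beat_times):
    """Mean resultant vector length R of movement phases relative to the beat.

    Each movement onset is mapped to a phase in [0, 2*pi) within the beat
    cycle containing it; R ranges from 0 (no consistent relation to the
    beat) to 1 (perfect phase locking).
    """
    beat_times = np.asarray(beat_times)
    phases = []
    for m in movement_times:
        i = np.searchsorted(beat_times, m) - 1    # index of preceding beat
        if 0 <= i < len(beat_times) - 1:
            cycle = beat_times[i + 1] - beat_times[i]
            phases.append(2 * np.pi * (m - beat_times[i]) / cycle)
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Illustration: head bobs landing near every other beat of a 500-ms pulse.
beats = np.arange(0, 20, 0.5)
bobs = beats[::2] + 0.03 * np.random.default_rng(2).standard_normal(beats[::2].size)
print(f"phase-locking R = {phase_locking(bobs, beats):.2f}")
```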
The great majority of the world's music is metrical, i.e., has periodic structure at multiple time scales. Does the metrical structure of a non-isochronous rhythm improve synchronization with a beat compared to synchronization with an isochronous sequence at the beat period? Beat synchronization is usually associated with auditory stimuli, but can people extract a beat from rhythmic visual sequences with metrical structure? We addressed these questions by presenting rhythmic patterns that were either isochronous or non-isochronous, in either the auditory or visual modality, and asking participants to tap to the beat, which was prescribed to occur at 800-ms intervals. For auditory patterns, we found that a strongly metrical structure did not improve overall synchronization accuracy compared with isochronous patterns of the same beat period, though it did influence the higher-level patterning of taps. Synchronization was impaired in weakly metrical patterns in which some beats were silent. For the visual patterns, participants were generally unable to synchronize to metrical non-isochronous rhythms, or to rapid isochronous rhythms. This suggests that beat perception and synchronization have a special affinity with the auditory system.
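In tapping tasks like this, synchronization accuracy is typically summarized by tap-to-beat asynchronies. The Python sketch below assumes the 800-ms beat period mentioned above and hypothetical tap times; the mean asynchrony indexes systematic anticipation or lag, and its standard deviation indexes stability.

```python
import numpy as np

def tap_asynchronies(tap_times, beat_period=0.8, n_beats=32):
    """Signed asynchrony (tap time minus nearest beat time) for each tap."""
    tap_times = np.asarray(tap_times)
    beats = np.arange(n_beats) * beat_period
    nearest = beats[np.argmin(np.abs(tap_times[:, None] - beats[None, :]), axis=1)]
    return tap_times - nearest

# Illustration: taps that slightly anticipate an 800-ms beat, as human
# tappers often do.
rng = np.random.default_rng(3)
taps = np.arange(32) * 0.8 - 0.02 + 0.015 * rng.standard_normal(32)
asyn = tap_asynchronies(taps)
print(f"mean asynchrony {1000 * asyn.mean():.0f} ms, SD {1000 * asyn.std():.0f} ms")
```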