Recent studies have shown brain differences between professional musicians and non-musicians with respect to size, asymmetry or gray matter density of specific cerebral regions. Here we demonstrate: (1) that anatomical differences in the motor cortex can already be detected by coarse visual inspection; and (2) that within musicians, even a discrimination of instruments with different manual dominance is possible on a gross anatomical scale. Multiple raters, blinded to subject identity and hemisphere, investigated within-musician differences in the Omega Sign (OS), an anatomical landmark of the precentral gyrus associated with hand movement representation. The sample of 64 brains comprised matched groups of 16 expert string-players, 16 expert pianists and 32 non-musicians. Ratings were analysed using kappa statistics. Intra- and interobserver reliabilities were high. Musicians had a more pronounced OS expression than non-musicians, with keyboard-players showing a left and string-players a right hemisphere advantage. This suggests a differential brain adaptation depending on the instrument played.
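The abstract reports that rater agreement was analysed with kappa statistics. As a minimal illustration of chance-corrected agreement between two raters (using made-up binary OS ratings, not the study's data), Cohen's kappa can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled independently at their base rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical OS-expression ratings (0 = weak, 1 = pronounced) from two raters
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(a, b), 3))  # prints 0.467
```

With multiple raters, as in the study, a multi-rater generalisation such as Fleiss' kappa would be used instead; the chance-correction logic is the same.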
Background: Performing music requires fast auditory and motor processing. Recent brain imaging studies of professional musicians have demonstrated that auditory stimulation produces co-activation of motor areas, whereas silent tapping of musical phrases evokes co-activation of auditory regions. Whether this is mediated by a specific cerebral relay station is unclear. Furthermore, the time course of this plasticity has not yet been addressed.
This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. Six conditions, arranged in a subtractive hierarchy, were therefore created: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and, as a control, the pure musical or speech rhythm. Systematic contrasts between these balanced conditions, following their hierarchical organization, showed substantial overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, the opposite lateralization was found for pitch in song. The IPS showed sensitivity to the discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features, which is reflected in a fundamental similarity of the brain areas involved in their perception. However, fine-grained acoustic differences at the word and pitch level are reflected in the IPS and in the lateralized activity of the IFG.
Music elicits profound emotions; however, the time course of these emotional responses during listening sessions is unclear. We investigated the length of time required for participants to initiate emotional responses ("integration time") to 138 musical samples from a variety of genres by monitoring their real-time continuous ratings of emotional content and arousal level of the musical excerpts (made using a joystick). On average, participants required 8.31 s (SEM = 0.10) of music before initiating emotional judgments. Additionally, we found that: 1) integration time depended on familiarity of songs; 2) soul/funk, jazz, and classical genres were more quickly assessed than other genres; and 3) musicians did not differ significantly in their responses from those with minimal instrumental musical experience. Results were partially explained by the tempo of musical stimuli and suggest that decisions regarding musical structure, as well as prior knowledge and musical preference, are involved in the emotional response to music.
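The abstract summarizes integration times as a mean with a standard error (SEM = sample standard deviation divided by the square root of n). As a small sketch of that summary statistic (on invented integration times, not the study's 138-sample data):

```python
import statistics

def sem(xs):
    # Standard error of the mean: sample standard deviation / sqrt(n).
    return statistics.stdev(xs) / len(xs) ** 0.5

# Hypothetical per-excerpt integration times in seconds (illustrative only)
times = [7.9, 8.4, 8.1, 8.6, 8.3]
print(round(statistics.mean(times), 2), round(sem(times), 2))  # prints 8.26 0.12
```

The SEM shrinks with larger samples, which is why a mean over 138 excerpts can carry a standard error as small as 0.10 s.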
Nonmotor symptoms in Parkinson's disease (PD) involving cognition and emotionality have progressively received attention. The objective of the present study was to investigate recognition of emotional prosody in patients with PD (n = 14) in comparison to healthy control subjects (HC, n = 14). Event-related brain potentials (ERPs) were recorded in a modified oddball paradigm under passive listening and active target detection instructions. Results showed poorer performance by PD patients in classifying emotional prosody. ERPs generated by emotional deviants (happy/sad) during passive listening revealed diminished amplitudes of the mismatch-related negativity for sad deviants, indicating an impairment of early preattentive processing of emotional prosody in PD.
Humans vary substantially in their ability to learn new motor skills. Here, we examined inter-individual differences in learning to play the piano, with the goal of identifying relations to structural properties of white matter fiber tracts relevant to audio-motor learning. Non-musicians (n = 18) learned to perform three short melodies on a piano keyboard in a pure audio-motor training condition (vision of their own fingers was occluded). Initial learning times ranged from 17 to 120 min (mean ± SD: 62 ± 29 min). Diffusion-weighted magnetic resonance imaging was used to derive the fractional anisotropy (FA), an index of white matter microstructural arrangement. A correlation analysis revealed that higher FA values were associated with faster learning of piano melodies. These effects were observed in the bilateral corticospinal tracts, bundles of axons relevant for the execution of voluntary movements, and the right superior longitudinal fasciculus, a tract important for audio-motor transformations. These results suggest that the speed with which novel complex audio-motor skills can be acquired may be determined by variability in structural properties of white matter fiber tracts connecting brain areas functionally relevant for audio-motor learning.