Background: One central question in the context of motor control and action monitoring is at what point in time errors can be detected. Previous electrophysiological studies of this issue focused on brain potentials elicited after erroneous responses, mainly in simple speeded response tasks. In the present study, we investigated brain potentials before the commission of errors in a natural and complex situation. Methodology/Principal Findings: Expert pianists bimanually played scales and patterns while the electroencephalogram (EEG) was recorded. Event-related potentials (ERPs) were computed for correct and incorrect performances. Results revealed differences already 100 ms prior to the onset of a note (i.e., prior to auditory feedback). We further observed that erroneous keystrokes were delayed in time and pressed more slowly. Conclusions: Our data reveal neural mechanisms in musicians that detect errors prior to the execution of erroneous movements. The underlying mechanism probably relies on predictive control processes that compare the predicted outcome of an action with the action goal.
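As an illustration of the ERP analysis described above, the following is a minimal sketch using MNE-Python. The file name, trigger codes, filter band, and epoch windows are assumptions for illustration, not the study's actual pipeline; the point is that epochs time-locked to note onset with a pre-onset window (tmin < 0) make activity roughly 100 ms before the keystroke visible.

```python
# Minimal ERP sketch (MNE-Python). File name and trigger codes are assumptions.
import mne

raw = mne.io.read_raw_fif("pianist_raw.fif", preload=True)  # hypothetical recording
raw.filter(0.1, 30.0)  # a typical ERP band-pass

events = mne.find_events(raw, stim_channel="STI 014")  # assumed stim channel
event_id = {"correct": 1, "error": 2}  # assumed trigger codes

# Time-lock epochs to note onset; the window starts 300 ms before onset so that
# pre-error activity (~100 ms before the keystroke) falls inside the epoch.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.3, tmax=0.5,
                    baseline=(-0.3, -0.2), preload=True)

evoked_correct = epochs["correct"].average()
evoked_error = epochs["error"].average()
mne.viz.plot_compare_evokeds({"correct": evoked_correct, "error": evoked_error})
```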
Musicians are highly trained motor experts with pronounced associations between musical actions and the corresponding auditory effects. However, the importance of auditory feedback for music performance is controversial, and it is unknown how feedback during music performance is processed. The present study investigated the neural mechanisms underlying the processing of auditory feedback manipulations in pianists. To disentangle effects of action-based and perception-based expectations, we compared feedback manipulations during performance to the mere perception of the same stimulus material. In two experiments, pianists bimanually performed sequences on a piano while, at random positions, the auditory feedback of single notes was manipulated, thereby creating a mismatch between an expected and an actually perceived action effect (action condition). In addition, pianists listened to tone sequences containing the same manipulations (perception condition). The manipulations in the perception condition were either task-relevant (Experiment 1) or task-irrelevant (Experiment 2). In both action and perception conditions, event-related potentials elicited by manipulated tones showed an early fronto-central negativity around 200 msec, presumably reflecting a feedback ERN/N200, followed by a positive deflection (P3a). The early negativity was more pronounced in the action condition than in the perception condition. This shows that during performance, the intention to produce specific auditory effects leads to stronger expectancies than those built up during music perception.
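The action-perception contrast reported above can be visualized as a difference wave. Here is a short sketch under the assumption that epochs have been saved with condition tags; the file name and tag names are hypothetical:

```python
# Difference wave between conditions (sketch; file and condition names assumed).
import mne

epochs = mne.read_epochs("pianist_feedback-epo.fif")  # hypothetical epochs file
evoked_action = epochs["action/manipulated"].average()
evoked_perception = epochs["perception/manipulated"].average()

# Subtracting perception from action isolates the extra negativity during performance.
diff = mne.combine_evoked([evoked_action, evoked_perception], weights=[1, -1])
diff.plot_joint(times=[0.2])  # inspect the fronto-central effect around 200 msec
```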
To analyze how emotions and imagery are shared, processed, and recognized in Guided Imagery and Music, we measured the brain activity of an experienced therapist (“Guide”) and client (“Traveler”) with dual-EEG in a real therapy session about the potential death of family members. Synchronously with the EEG, the session was videotaped and then micro-analyzed. Four raters identified therapeutically important moments of interest (MOI) and of no interest (MONI), which were transcribed and annotated. Several indices of emotion- and imagery-related processing were analyzed: frontal and parietal alpha asymmetry, frontal midline theta, and occipital alpha activity. Session ratings showed overlaps across all raters, confirming the importance of these MOIs, which showed different cortical activity in visual areas compared to resting state. MOI 1 was a pivotal moment including an important imagery with a message of hope from a close family member, while in the second MOI the Traveler sent a message to an unborn baby. Generally, the results indicated that the emotions of Traveler and Guide during important moments were not positive, pleasurable, or relaxed when compared to resting state, confirming that both were dealing with negative emotions and anxiety that had to be contained in the interpersonal process. However, the temporal dynamics of emotion-related markers suggested shifts in emotional valence and intensity during these important, personally meaningful moments; for example, while the message of hope was being received, an increase of frontal alpha asymmetry was observed, reflecting increased positive emotional processing. EEG source localization during the message suggested a peak activation in the left middle temporal gyrus. Interestingly, peaks in emotional markers in the Guide partly paralleled the Traveler's peaks; for example, during the Guide's strong feeling of mutuality in MOI 2, the time series of frontal alpha asymmetries showed a significant cross-correlation, indicating similar emotional processing in Traveler and Guide. Investigating the moment-to-moment interaction in music therapy showed how asymmetry peaks align with the situated cognition of Traveler and Guide along the emotional contour of the music, representing the highs and lows of the therapy process. Combining dual-EEG with detailed audiovisual and qualitative data seems to be a promising approach for further research into music therapy.
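Two of the markers mentioned, frontal alpha asymmetry (FAA) and its cross-correlation between Traveler and Guide, lend themselves to a compact sketch. FAA is conventionally computed as ln(alpha power at F4) minus ln(alpha power at F3); the sliding-window parameters, sampling rate, and the random data standing in for the EEG channels are all assumptions for illustration, not the study's pipeline:

```python
# Sketch: frontal alpha asymmetry (FAA) and a Traveler-Guide cross-correlation.
# FAA = ln(alpha power, F4) - ln(alpha power, F3); positive values are conventionally
# read as relatively greater left-frontal (approach/positive) activity.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band for one channel segment."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def faa_series(f3, f4, fs, win_s=4.0, step_s=1.0):
    """Sliding-window FAA time series from F3/F4 signals (window sizes assumed)."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(f3) - win + 1, step)
    return np.array([np.log(alpha_power(f4[s:s + win], fs))
                     - np.log(alpha_power(f3[s:s + win], fs)) for s in starts])

fs = 250.0
rng = np.random.default_rng(0)
n = int(60 * fs)  # one minute of stand-in data per channel
traveler_faa = faa_series(rng.standard_normal(n), rng.standard_normal(n), fs)
guide_faa = faa_series(rng.standard_normal(n), rng.standard_normal(n), fs)

# Normalized cross-correlation between the two asymmetry time series.
t = (traveler_faa - traveler_faa.mean()) / traveler_faa.std()
g = (guide_faa - guide_faa.mean()) / guide_faa.std()
xcorr = np.correlate(t, g, mode="full") / len(t)
lags = np.arange(-len(t) + 1, len(t))
print("peak r =", xcorr.max(), "at lag (windows):", lags[xcorr.argmax()])
```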
The present study investigated the effects of auditory selective attention on the processing of syntactic information in music and speech using event-related potentials. Spoken sentences or musical chord sequences were presented either in isolation or simultaneously. When presented simultaneously, participants had to focus their attention either on the speech or on the music. Final words of sentences and final harmonies of chord sequences were syntactically either correct or incorrect. Irregular chords elicited an early right anterior negativity (ERAN), whose amplitude was decreased when music was presented simultaneously with speech, compared to when only music was presented. However, the amplitude of the ERAN-like waveform elicited when music was ignored did not differ from the conditions in which participants attended the chord sequences. Irregular sentences elicited an early left anterior negativity (ELAN), regardless of whether speech was presented in isolation, was attended, or was to be ignored. These findings suggest that the neural mechanisms underlying the processing of syntactic structure in music and speech operate partially automatically and, in the case of music, are influenced by attentional conditions. Moreover, the ERAN was slightly reduced when irregular sentences were presented, but only when music was ignored. Therefore, these findings provide no clear support for an interaction of the neural resources for syntactic processing at these early stages.
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, with larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances, such as the combination of 3D motion capture techniques with EEG, will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and, more generally, to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.
This article describes a setup for the simultaneous recording of electrophysiological data (EEG), musical data (MIDI), and three-dimensional movement data. Each of these three kinds of measurement, previously conducted separately, has been shown to provide important information about different aspects of music performance as an example of a demanding multisensory motor skill. With the method described here, it is possible to record brain-related activity and movement data simultaneously, with accurate timing resolution and at relatively low cost. EEG and MIDI data were synchronized with a modified version of the FTAP software, which sent synchronization signals to the EEG recording device simultaneously with keypress events. Similarly, a motion capture system sent synchronization signals simultaneously with each recorded frame. The setup can be used for studies investigating cognitive and motor processes during music performance and music-like tasks, for example in the domains of motor control, learning, music therapy, or musical emotions. Thus, this setup offers a promising possibility for a more behaviorally driven analysis of brain activity.
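The core of the synchronization logic, sending a marker to the EEG system at the moment of each keypress, can be illustrated in a few lines of Python. This is not FTAP code: the mido library handles the MIDI input, and send_trigger() is a placeholder for whatever hardware-specific call (e.g., a parallel-port write) marks the EEG recording. A real setup also needs the low-latency timing that FTAP provides; a plain Python loop is only a conceptual sketch.

```python
# Conceptual sketch of keypress-to-trigger synchronization (not FTAP itself).
import mido

def send_trigger(code: int) -> None:
    # Placeholder: replace with the actual trigger interface of the EEG amplifier,
    # e.g. a parallel-port or serial write. Printing is for illustration only.
    print(f"EEG trigger {code}")

with mido.open_input() as port:       # first available MIDI input port
    for msg in port:                  # blocks, yielding MIDI messages as they arrive
        # A note_on with velocity > 0 is a keypress; velocity 0 signals a release.
        if msg.type == "note_on" and msg.velocity > 0:
            send_trigger(msg.note)    # mark the keypress onset in the EEG record
```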
Previous studies examining EEG and LORETA in patients with chronic pain found overactivation of high-theta (6–9 Hz) and low-beta (12–16 Hz) power in central regions. MEG studies with healthy subjects, correlating evoked nociception ratings with source localization, described delta and gamma changes in response to two music interventions. Using similar music conditions with chronic pain patients, we examined EEG responses to two different music interventions for pain. To study this process in depth, we conducted a mixed-methods case study based on three clinical cases. The effectiveness of personalized music therapy improvisations (entrainment music, EM) versus preferred music for chronic pain was examined with 16 participants. Three patients were randomly selected for follow-up EEG sessions three months post-intervention, where they listened to recordings of the music from the interventions provided during the research. To test the difference between EM and preferred music, recordings were presented in a block design: silence, the participant's own composed EM (depicting both “pain” and “healing”), preferred (commercially available) music, and a non-participant's EM as a control. Participants rated their pain before and after the EEG on a 1–10 scale. We conducted a detailed single-case analysis comparing all conditions, as well as a group comparison of the entrainment-healing condition versus the preferred music condition. Power spectra and corresponding LORETA distributions were analyzed, focusing on expected changes in delta, theta, beta, and gamma frequencies, particularly in sensorimotor and central regions. One participant reported intentionally attending moment by moment to the sounds/music rather than to the pain, together with decreased awareness of pain. The corresponding EEG analysis showed accompanying power changes in sensorimotor regions, and the LORETA projection pointed to insula-related changes during the entrainment-pain music. LORETA also indicated involvement of visual-spatial, motor, and language/music-improvisation processing in response to his personalized EM, which may reflect active recollection of creating the EM. Group-wide analysis showed common brain responses to personalized entrainment-healing music in the theta and low-beta range in the right pre- and postcentral gyrus. We observed somatosensory changes consistent with the processing of pain during entrainment-healing music that were not seen during preferred music. These results may depict top-down neural processes associated with active coping with pain.
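The band-power part of this analysis can be sketched as follows, restricted to the high-theta (6–9 Hz) and low-beta (12–16 Hz) bands of interest at central electrodes. The file name, channel picks, and use of Welch's method are assumptions, and the LORETA source projection is a separate step not shown here:

```python
# Sketch: band power at central electrodes per frequency band of interest.
# File name and channel picks are assumptions; segmentation of the block design
# (silence / EM / preferred music) would additionally use annotations or events.
import mne

raw = mne.io.read_raw_fif("pain_patient_raw.fif", preload=True)  # hypothetical file
picks = ["C3", "Cz", "C4"]  # central region, cf. the sensorimotor findings

bands = {"high theta": (6.0, 9.0), "low beta": (12.0, 16.0)}
for label, (fmin, fmax) in bands.items():
    spectrum = raw.compute_psd(method="welch", fmin=fmin, fmax=fmax, picks=picks)
    psds, freqs = spectrum.get_data(return_freqs=True)
    print(label, "mean power over central channels:", psds.mean())
```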
Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance and reported a pre-error negativity occurring approximately 70–100 ms before the error was committed and became audible. This pre-error negativity was assumed to reflect predictive control processes that compare the predicted with the actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi, and no data about the underlying movements were available. In the present study, we exploratively recorded ERPs and 3D movement data of pianists' fingers simultaneously while the pianists performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when correct and incorrect keystrokes were performed at comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories, defined as the moment a finger makes initial contact with the key surface, that is, the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited a negativity before the onset of tactile feedback. These results tentatively suggest that tactile feedback plays an important role in error monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for detecting an upcoming error.
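The idea of time-locking ERPs to a kinematic landmark can be sketched as follows: detect the frame at which the fingertip's vertical trajectory first crosses the key-surface height (the onset of tactile feedback), convert those frames to EEG sample indices, and epoch around them. File names, sampling rates, the key-surface height, and the assumption of synchronized mocap/EEG clocks are all illustrative:

```python
# Sketch: ERP epochs time-locked to a kinematic landmark (initial key contact).
# Sampling rates, key-surface height, and synchronized clocks are assumptions.
import numpy as np
import mne

fs_eeg, fs_mocap = 500.0, 200.0      # assumed sampling rates
key_surface_z = 0.02                 # assumed key-surface height in metres

raw = mne.io.read_raw_fif("pianist_raw.fif", preload=True)  # hypothetical recording
z = np.load("fingertip_z.npy")       # vertical fingertip trajectory (mocap frames)

# First downward crossing of the key surface = initial key contact.
below = z < key_surface_z
contact_frames = np.flatnonzero(below[1:] & ~below[:-1]) + 1

# Convert mocap frames to EEG samples (ignoring raw.first_samp for brevity).
contact_samples = np.round(contact_frames / fs_mocap * fs_eeg).astype(int)
events = np.column_stack([contact_samples,
                          np.zeros_like(contact_samples),
                          np.ones_like(contact_samples)])

epochs = mne.Epochs(raw, events, event_id={"key_contact": 1},
                    tmin=-0.3, tmax=0.3, baseline=(-0.3, -0.2), preload=True)
evoked = epochs.average()
```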