We introduce a new neurofeedback approach that allows users to manipulate expressive parameters in music performances using their emotional state, and we present the results of a pilot clinical experiment applying the approach to alleviate depression in elderly people. Ten adults (9 female, 1 male; mean age = 84 years, SD = 5.8) with normal hearing participated in the neurofeedback study, which consisted of 10 sessions (2 sessions per week) of 15 min each. EEG data were acquired using the Emotiv EPOC EEG device. In all sessions, subjects were asked to sit in a comfortable chair facing two loudspeakers, to close their eyes, and to avoid moving during the experiment. Participants listened to music pieces preselected according to their musical preferences and were encouraged to increase the loudness and tempo of the pieces based on their arousal and valence levels. The neurofeedback system was tuned so that increased arousal, computed as the beta-to-alpha activity ratio in the frontal cortex, corresponded to increased loudness, and increased valence, computed as relative frontal alpha activity in the right lobe compared to the left lobe, corresponded to increased tempo. Pre- and post-intervention evaluation of six participants was performed using the Beck Depression Inventory (BDI), showing an average improvement of 17.2% (1.3) in their BDI scores at the end of the study. In addition, an analysis of the participants' EEG data showed a significant decrease of relative alpha activity in the left frontal lobe (p = 0.00008), which may be interpreted as an improvement in their depression.
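For concreteness, the sketch below (in Python; not the study's code) computes the two indicators as defined above: arousal as the beta-to-alpha power ratio over frontal channels, and valence as the right-minus-left relative frontal alpha activity. The channel pair (F3/F4), the Welch estimator, and the band limits (alpha 8–12 Hz, beta 12–28 Hz) are common conventions assumed here, not values confirmed by the study.

```python
# Hedged sketch of the arousal/valence indicators described above.
# Assumes raw EEG from two frontal channels (F3 = left, F4 = right)
# sampled at 128 Hz, as on the Emotiv EPOC. Band limits are common
# conventions, not values confirmed by the study.
import numpy as np
from scipy.signal import welch

FS = 128  # Emotiv EPOC sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Average power of signal x in the [lo, hi] Hz band (Welch PSD)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def arousal_valence(left, right, fs=FS):
    """Arousal = beta/alpha ratio over both frontal channels; valence =
    right-minus-left relative frontal alpha (hemispheric asymmetry:
    higher right alpha, i.e. less right activation, read as higher valence)."""
    alpha_l = band_power(left, fs, 8, 12)
    alpha_r = band_power(right, fs, 8, 12)
    beta_l = band_power(left, fs, 12, 28)
    beta_r = band_power(right, fs, 12, 28)
    arousal = (beta_l + beta_r) / (alpha_l + alpha_r)
    valence = alpha_r - alpha_l
    return arousal, valence

# Toy usage with synthetic signals (15 s of noise per channel):
rng = np.random.default_rng(0)
a, v = arousal_valence(rng.standard_normal(FS * 15), rng.standard_normal(FS * 15))
print(f"arousal={a:.2f}, valence={v:+.4f}")
```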
Music is known to have the power to induce strong emotions. The present study assessed, based on electroencephalography (EEG) data, the emotional response of terminally ill cancer patients to a music therapy intervention in a randomized controlled trial. A sample of 40 participants from the palliative care unit of the Hospital del Mar in Barcelona was randomly assigned to two groups of 20. The first group [experimental group (EG)] participated in a session of music therapy (MT), and the second group [control group (CG)] was provided with company. Based on our previous work on EEG-based emotion detection, instantaneous emotional indicators, in the form of coordinates in the arousal-valence plane, were extracted from the participants' EEG data. The emotional indicators were analyzed in order to quantify (1) the overall emotional effect of MT on the patients compared to controls, and (2) the relative effect of the different MT techniques applied during each session. During each MT session, five conditions were considered: I (the patient's initial state before MT), C1 (passive listening), C2 (active listening), R (relaxation), and F (the patient's final state). EEG data analysis showed a significant increase in valence (p = 0.0004) and arousal (p = 0.003) between I and F in the EG. No significant changes were found in the CG. These results can be interpreted as a positive emotional effect of MT in advanced cancer patients. In addition, according to pre- and post-intervention questionnaire responses, participants in the EG also showed a significant decrease in tiredness, anxiety, and breathing difficulties, as well as an increase in well-being. No equivalent changes were observed in the CG.
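The following minimal sketch (assumed, not taken from the study) illustrates how the I-versus-F comparison above can be run: one mean valence value per patient in each condition, compared with a paired test. The abstract does not name the statistical test used, so a Wilcoxon signed-rank test is chosen here as a plausible option, and the data are synthetic placeholders.

```python
# Hedged sketch of the I-vs-F comparison: per-patient mean valence in
# the initial (I) and final (F) conditions, compared with a paired
# Wilcoxon signed-rank test (an assumption; the study's actual test is
# not named in the abstract). All values below are synthetic.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_patients = 20  # experimental group size

valence_I = rng.normal(0.0, 0.1, n_patients)                 # mean valence, condition I
valence_F = valence_I + rng.normal(0.05, 0.05, n_patients)   # mean valence, condition F

stat, p = wilcoxon(valence_F, valence_I)
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")  # a small p would mirror the reported increase
```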
Computational approaches for modeling expressive music performance have produced systems that emulate music expression, but few steps have been taken in the domain of ensemble performance. In this paper, we propose a novel method for building computational models of ensemble expressive performance and show how this method can be applied to derive new insights about collaboration among musicians. To address the problem of interdependence among musicians, we propose the introduction of inter-voice contextual attributes. We evaluate the method on data extracted from multi-modal recordings of string quartet performances in two different conditions: solo and ensemble. We used machine-learning algorithms to produce computational models for predicting the intensity, timing deviations, vibrato extent, and bowing speed of each note. The introduced inter-voice contextual attributes generally improved the prediction of the expressive parameters. Furthermore, results on attribute selection show that the models trained on ensemble recordings took more advantage of inter-voice contextual attributes than those trained on solo recordings.
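As an illustration of the evaluation logic, the sketch below trains a regressor for one expressive parameter with and without inter-voice attributes and compares cross-validated scores. The feature names, the random-forest learner, and the synthetic data are assumptions for illustration only, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's pipeline): compare a regressor
# trained on intra-voice attributes alone against one that also sees
# inter-voice contextual attributes, via cross-validated R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_notes = 500
# Intra-voice attributes, e.g. pitch, duration, metrical position (hypothetical names)
intra = rng.standard_normal((n_notes, 3))
# Inter-voice attributes, e.g. interval and onset offset to the other voices (hypothetical)
inter = rng.standard_normal((n_notes, 2))
# Synthetic target (e.g. note intensity) that partly depends on inter-voice context
y = intra[:, 0] + 0.5 * inter[:, 0] + 0.1 * rng.standard_normal(n_notes)

for name, X in [("intra only", intra), ("intra + inter-voice", np.hstack([intra, inter]))]:
    score = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                            X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```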
We train and evaluate two machine learning models for predicting fingering in violin performances using the motion and EMG sensors integrated in the Myo device. Our aim is twofold: first, to provide a fingering recognition model in the context of a gamified virtual violin application in which we measure both right-hand (i.e., bowing) and left-hand (i.e., fingering) gestures; and second, to implement a tracking system for a computer-assisted pedagogical tool for self-regulated learners in high-level music education. Our approach is based on the principle of mapping-by-demonstration, in which the model is trained by the performer. We evaluated a model based on Decision Trees and compared it with a Hidden Markov Model (HMM).
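A rough sketch of the two classifiers follows, using synthetic stand-ins for per-frame Myo features (EMG channels plus motion summaries). The decision tree classifies frames independently, while the HMM variant fits one GaussianHMM per finger class (via hmmlearn) and labels a window by maximum likelihood; this per-class-HMM scheme and the feature layout are assumptions, not the paper's implementation.

```python
# Hedged sketch of the two model families compared above, on synthetic
# stand-ins for Myo features (EMG + motion summaries per frame).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
n_classes, n_feats, frames = 4, 10, 200  # 4 finger positions (hypothetical)

def make_frames(c, n):
    """Synthetic frames for class c: class-dependent mean activations."""
    return rng.normal(loc=c, scale=1.0, size=(n, n_feats))

X_train = np.vstack([make_frames(c, frames) for c in range(n_classes)])
y_train = np.repeat(np.arange(n_classes), frames)

# Frame-wise decision tree
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)

# One HMM per class, trained on that class's frame sequence
hmms = []
for c in range(n_classes):
    m = GaussianHMM(n_components=2, covariance_type="diag", random_state=0)
    m.fit(make_frames(c, frames))
    hmms.append(m)

# Classify a new window: the tree votes per frame; the HMMs score the sequence
window = make_frames(2, 30)
tree_pred = np.bincount(tree.predict(window)).argmax()
hmm_pred = int(np.argmax([m.score(window) for m in hmms]))
print(f"tree: class {tree_pred}, HMM: class {hmm_pred}")  # both should recover class 2
```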