Abstract: In this paper, we provide a first-person outlook on the technical challenges and developments involved in the recording, analysis, archiving, and cloud-based interchange of multimodal string quartet performance data as part of a collaborative research project on ensemble music making. In order to facilitate the sharing of our own collection of multimodal recordings and extracted descriptors and annotations, we developed a hosting platform and data archival protocol through which multimodal data (audio, video, …
“…The present study demonstrates the potential importance of multimodal aspects from the performance space itself and proposes introducing sensory material other than auditory stimuli into MER systems, namely visuals, given that many popular music streaming services include visual material. Multimodal emotion-sensing using computer vision [129] is therefore promising for the future design of music emotion studies, with more multimodal data exchange platforms and web applications emerging to produce enriched music performance resources [130], [131].…”
Section: Connecting Emergent Perceptual Themes and MIR
Current music emotion recognition (MER) systems rely on emotion data averaged across listeners and over time to infer the emotion expressed by a musical piece, often neglecting time- and listener-dependent factors. These limitations can restrict the efficacy of MER systems and cause misjudgements. We present two exploratory studies on music emotion perception. First, in a live music concert setting, fifteen audience members annotated perceived emotion in the valence-arousal space over time using a mobile application. Analyses of inter-rater reliability yielded widely varying levels of agreement in the perceived emotions. A follow-up lab-based study to uncover the reasons for such variability was conducted, where twenty-one participants annotated their perceived emotions whilst viewing and listening to a video recording of the original performance and offered open-ended explanations. Thematic analysis revealed salient features and interpretations that help describe the cognitive processes underlying music emotion perception. Some of the results confirm known findings of music perception and MER studies. Novel findings highlight the importance of less frequently discussed musical attributes, such as musical structure, performer expression, and stage setting, as perceived across audio and visual modalities. Musicians are found to attribute emotion change to musical harmony, structure, and performance technique more than non-musicians. We suggest that accounting for such listener-informed music features can benefit MER in helping to address variability in emotion perception by providing reasons for listener similarities and idiosyncrasies.
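As a rough illustration of the inter-rater reliability analysis this abstract describes, one simple approach is the mean pairwise Pearson correlation across listeners' time-varying annotations. The sketch below is a minimal, hypothetical version: the `mean_pairwise_agreement` helper and the valence values are illustrative assumptions, not taken from the study.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length annotation series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_pairwise_agreement(raters):
    """Average correlation over all rater pairs: a simple proxy for
    inter-rater reliability on time-series emotion annotations."""
    k = len(raters)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return mean(pearson(raters[i], raters[j]) for i, j in pairs)

# Hypothetical valence traces from three listeners over five time windows
valence = [
    [0.1, 0.4, 0.8, 0.5, 0.2],
    [0.2, 0.5, 0.7, 0.4, 0.1],
    [0.0, 0.3, 0.9, 0.6, 0.3],
]
score = mean_pairwise_agreement(valence)  # near 1.0 means strong agreement
```

Low or widely scattered pairwise scores, as reported in the concert study, would then motivate exactly the kind of follow-up qualitative work the abstract describes.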
“…Although these tools may help therapists understand their data and make clinical decisions, the technology must be sufficiently robust, fast, and easy to use for therapists to be able to beneficially integrate them into their daily workflow. Regarding the limitations of current MIR approaches, it is worth pointing out that the examination of synchronization between performers has been analyzed in a range of studies in music psychology and empirical musicology (Keller, 2014; Volpe et al., 2016), although the visualization of synchronization, which may be of value when analyzing therapy sessions, has only been approached recently (Maestre et al., 2017), and remains largely an open subject in MIR.…”
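Synchronization between performers, as raised in the quote above, is commonly quantified by finding the time lag that best aligns two players' onset-strength curves via cross-correlation. The following is a minimal sketch under that assumption; the `sync_lag` helper and the toy onset envelopes are hypothetical, not from the cited work.

```python
def sync_lag(a, b, max_lag):
    """Return the lag (in frames) by which series b trails series a,
    chosen as the shift maximizing their inner product, i.e. a
    bare-bones discrete cross-correlation."""
    def score(lag):
        # positive lag: compare a[t] against b[t + lag]
        pairs = zip(a, b[lag:]) if lag >= 0 else zip(a[-lag:], b)
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=score)

# Hypothetical onset-strength envelopes: performer B trails A by 2 frames
a = [0, 1, 0, 0, 0, 1, 1, 0, 0, 0]
b = [0, 0, 0, 1, 0, 0, 0, 1, 1, 0]
lag = sync_lag(a, b, max_lag=3)  # positive: B lags behind A
```

Plotting such lags over a sliding window is one plausible route to the kind of synchronization visualization the quote flags as an open subject.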
Section: Analysis/Visualization of Musical Structures
The fields of music, health, and technology have seen significant interactions in recent years in developing music technology for health care and well-being. In an effort to strengthen the collaboration between the involved disciplines, the workshop “Music, Computing, and Health” was held to discuss best practices and the state of the art at the intersection of these areas with researchers from music psychology and neuroscience, music therapy, music information retrieval, music technology, medical technology (medtech), and robotics. Following the discussions at the workshop, this article provides an overview of the different methods of the involved disciplines and their potential contributions to developing music technology for health and well-being. Furthermore, the article summarizes the state of the art in music technology that can be applied in various health scenarios and provides a perspective on challenges and opportunities for developing music technology that (1) supports person-centered care and evidence-based treatments, and (2) contributes to developing standardized, large-scale research on music-based interventions in an interdisciplinary manner. The article provides a resource for those seeking to engage in interdisciplinary research using music-based computational methods to develop technology for health care, and aims to inspire future research directions by evaluating the state of the art with respect to the challenges facing each field.
“…The visual graphics technology discussed in this section is mainly used to study the physical mechanisms of wind instruments. This method is of great significance for understanding the physical nature of musical instrument sounds, for physical modeling of timbre synthesis, and for guiding the design and production of musical instruments [13]. This section introduces the basic principles and applications of two specialized optical imaging techniques: Schlieren imaging and laser Doppler imaging.…”
Music visualization presents music information through graphic images, which helps improve the accuracy and effectiveness of music information communication. In view of the shortcomings of the current music visualization field, this paper combines K-means clustering, decision-tree fusion, and other statistical methods with music graphic images to construct a music visualization model based on graphic images and mathematical statistics. First, the application principles of Schlieren imaging and laser Doppler imaging in the visualization of music graphic images are described. Second, on the basis of music graphic images, the K-means clustering method is used to perform cluster analysis on music visualization information. Finally, the classification of music visual information is studied through the decision-tree fusion method. Case analyses and performance tests show the advantages of this music visualization method. It can provide a scientific reference model and basis for the modern music industry to establish new visualization systems using graphic images and mathematical statistics.
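The abstract does not spell out its clustering pipeline, but the K-means step it names can be sketched as a bare Lloyd's-algorithm pass over per-frame audio descriptors (e.g. loudness and brightness). Everything below is an illustrative assumption: the feature values, the two-dimensional descriptor choice, and the deterministic first-k initialization.

```python
def kmeans(points, k, iters=20):
    """Minimal Lloyd's K-means on 2-D feature vectors.
    Initialization uses the first k points, a deliberately simple
    (and order-sensitive) choice for this illustration."""
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                          + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, clusters

# Hypothetical (loudness, brightness) frames: a quiet/dark group
# and a loud/bright group
points = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1),
          (0.8, 0.9), (0.15, 0.15), (0.85, 0.85)]
centroids, clusters = kmeans(points, 2)
```

The resulting cluster labels could then feed a downstream classifier, analogous to the decision-tree stage the abstract describes.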