With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence that performers can communicate their intended emotional expressions in music to listeners with high accuracy. Although related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g., happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated, on a 0-10 scale, how well each of the seven emotions matched the emotional quality of each recording. The same instruments from the original study (i.e., violin, voice, and flute) were used, with the addition of piano. To increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Decoding accuracy was high overall (57%) when emotion ratings were aggregated across the sample of participants, similar to the method of analysis in the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalized Linear Mixed Effects Regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.
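To make the gap between the two scoring approaches concrete, here is a minimal sketch in Python, assuming a long-format ratings table and the common criterion that a recording counts as decoded when the intended emotion receives the highest rating; the data and column names are hypothetical, not the study's.

```python
# Toy sketch: why sample-aggregated decoding accuracy can exceed the
# average of individual accuracies. All data and names are hypothetical.
import pandas as pd

# long format: one rating per participant x recording x emotion scale
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "recording":   ["r1"] * 6,
    "intended":    ["sad"] * 6,
    "emotion":     ["sad", "happy", "angry"] * 2,
    "rating":      [6, 7, 2, 8, 3, 1],   # participant 1 misses, 2 hits
})

def hit(sub):
    # a recording is decoded if the intended emotion gets the top rating
    return sub.loc[sub["rating"].idxmax(), "emotion"] == sub["intended"].iloc[0]

# aggregate scoring: average ratings across participants first
agg = df.groupby(["recording", "emotion", "intended"], as_index=False)["rating"].mean()
agg_acc = agg.groupby("recording").apply(hit).mean()

# individual scoring: score each participant separately, then average
ind = df.groupby(["participant", "recording"]).apply(hit)
ind_acc = ind.groupby("participant").mean().mean()

print(f"aggregate: {agg_acc:.1f}  individual: {ind_acc:.1f}")  # 1.0 vs. 0.5
```

Averaging ratings across participants before taking the maximum smooths out individual disagreements, which is one way the aggregate accuracy (57%) can exceed the mean individual accuracy (31%).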
The spontaneous motor tempo (SMT) describes the pace of regular, repeated movements such as hand clapping or walking. It is typically measured by having people tap with their index finger at the pace that feels most natural and comfortable to them. A number of factors have been suggested to influence the SMT, such as age, time of day, arousal, and potentially musical experience. This study investigated the effects of these factors jointly and outside the lab by implementing the finger-tapping paradigm in an online experiment using a self-developed web application. Because the distribution of participants' SMTs (N = 3,576) was multimodal, with peaks spaced roughly 250 ms apart, a Gaussian mixture model was applied that grouped participants into six clusters, ranging from Very Fast (M = 265 ms, SD = 74) to Very Slow (M = 1,757 ms, SD = 166). These SMT clusters differed in terms of age, suggesting that older participants had a slower SMT, and time of day, with earlier testing times associated with slower SMTs. While arousal did not differ between the SMT clusters, more aroused participants showed faster SMTs across all normalized SMT clusters. Effects of musical experience were inconclusive. With a large international sample, these results provide insights into factors influencing the SMT irrespective of cultural background, offering a window into human timing processes.
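A minimal sketch of the clustering step, assuming one mean inter-tap interval per participant; the simulated data below only mimic the reported multimodal shape and are not the study's measurements.

```python
# Sketch: grouping participants' spontaneous motor tempi (mean inter-tap
# interval in ms, one value per participant) into six clusters with a
# Gaussian mixture model. The data are simulated; the study's actual
# preprocessing may differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
modes = (265, 515, 765, 1015, 1265, 1757)   # illustrative, ~250 ms apart
smt = np.concatenate([rng.normal(m, 80, 400) for m in modes]).reshape(-1, 1)

gmm = GaussianMixture(n_components=6, random_state=0).fit(smt)
for k in np.argsort(gmm.means_.ravel()):
    print(f"cluster: M = {gmm.means_[k, 0]:.0f} ms, "
          f"SD = {np.sqrt(gmm.covariances_[k, 0, 0]):.0f} ms")
```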
Music information retrieval (MIR) is a fast-growing research area. One of its aims is to extract musical characteristics from audio. In this study, we took the perspective of researchers without specialised technical MIR experience and set out to explore its opportunities and challenges in the specific context of musical emotion perception. Twenty sound engineers rated 60 musical excerpts from a broad range of styles with respect to 22 spectral, musical, and cross-modal features (perceptual features) and perceived emotional expression. In addition, we extracted 86 features (acoustic features) from the excerpts with the MIRtoolbox (Lartillot & Toiviainen, 2007). First, we evaluated the perceptual and extracted acoustic features. Both posed statistical challenges (e.g., perceptual features were often bimodally distributed, and acoustic features were highly correlated). Second, we tested the suitability of the acoustic features for modeling perceived emotional content. Four nearly disjoint feature sets provided similar results, implying a certain arbitrariness of feature selection. We compared the predictive power of perceptual and acoustic features using linear mixed effects models, but the results were inconclusive. We discuss critical points and make suggestions for further evaluating MIR tools for modeling music perception and processing.
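As a sketch of the final modeling step, a linear mixed effects model with random intercepts per excerpt can be fitted with statsmodels; the simulated data, variable names, and single-predictor formula below are illustrative assumptions, not the authors' specification.

```python
# Sketch: linear mixed effects model predicting a perceived-emotion rating
# from one extracted acoustic feature, with random intercepts per excerpt.
# Data, names, and the single-predictor formula are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_excerpts, n_raters = 60, 20
brightness = rng.normal(0, 1, n_excerpts)         # one value per excerpt
excerpt_effect = rng.normal(0, 0.5, n_excerpts)   # excerpt-level intercepts
df = pd.DataFrame({
    "excerpt": np.repeat(np.arange(n_excerpts), n_raters),
    "brightness": np.repeat(brightness, n_raters),
    "rating": np.repeat(0.5 * brightness + excerpt_effect, n_raters)
              + rng.normal(0, 1, n_excerpts * n_raters),
})

model = smf.mixedlm("rating ~ brightness", df, groups=df["excerpt"])
print(model.fit().summary())
```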
In a widely cited study, Levitin (1994) suggested the existence of absolute pitch memory for music in the general population, beyond the rare trait of genuine absolute pitch (AP). In his sample, a significant proportion of non-AP possessors were able to reproduce absolute pitch levels when asked to sing very familiar pop songs from memory. Forty-four percent of participants sang the correct pitch on at least one of two trials, and 12% were correct on both trials. However, until now, no replication of this study has been published. The current paper presents the results of a large replication endeavour across six different labs in Germany and the UK. All labs used the same methodology, carefully replicating Levitin's original experiment. In each lab, between 40 and 50 participants were tested (N = 277). Participants were asked to sing two different pop songs of their choice. All sung productions were compared to the original songs. Twenty-five percent of the participants sang the exact pitch of at least one of the two chosen songs, and 4% hit the right pitches for both songs. Our results generally confirm the findings of Levitin (1994). However, the results differ considerably across laboratories, and the overall effect estimated using meta-analysis techniques was significantly smaller than Levitin's original result. This illustrates the variability of empirical findings derived from small sample sizes and corroborates the need for replication and meta-analytical studies in music psychology in general.
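A brief sketch of the comparison logic, assuming sung and original pitches are available as fundamental frequencies; the octave-folding and the notion of a "correct" production here are illustrative assumptions, not Levitin's exact criterion.

```python
# Sketch: comparing a sung pitch with the original song's pitch in semitones.
# The scoring criterion below is an assumption for illustration only.
import math

def hz_to_semitones(freq_hz: float) -> float:
    """Map frequency to a MIDI-style semitone scale (A4 = 440 Hz = 69)."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

def pitch_error(sung_hz: float, original_hz: float) -> float:
    """Absolute pitch deviation in semitones, ignoring octave errors."""
    diff = hz_to_semitones(sung_hz) - hz_to_semitones(original_hz)
    return abs((diff + 6) % 12 - 6)   # fold into +/- 6 semitones

print(pitch_error(446.0, 440.0))  # ~0.23 semitones: a near-exact production
```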
In this article we show that a subgroup of music experts has a reliable and consistent notion of melodic similarity, and that this notion can be measured with satisfactory precision. Our measurements enable us to model the similarity ratings of music experts by automated and algorithmic means. A large number of algorithmic similarity measures found in the literature were mathematically systematised and implemented. The algorithms that agreed best with the human experts were selected and optimised by statistical means for different contexts. A multidimensional scaling model of the algorithmic similarity measures was constructed to give an overview of the different musical dimensions reflected by these measures. We show examples where these optimised methods were successfully applied to real-world problems such as folk-song categorisation and analysis, and discuss further applications and implications.
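As an illustration of one family of such measures, the sketch below computes an edit-distance similarity on pitch-interval sequences and embeds the resulting dissimilarities with multidimensional scaling; the toy melodies and this particular measure are assumptions for illustration, not the optimised measures from the study.

```python
# Sketch: edit-distance-based melodic dissimilarity on pitch-interval
# sequences, plus an MDS embedding of the resulting dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

def edit_distance(a, b):
    """Classic Levenshtein distance between two symbol sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

# toy melodies as MIDI pitches; intervals make the measure
# transposition-invariant
melodies = [[60, 62, 64, 65, 67], [62, 64, 66, 67, 69], [60, 60, 67, 67, 69]]
intervals = [tuple(np.diff(m)) for m in melodies]

dist = np.array([[edit_distance(a, b) for b in intervals] for a in intervals])
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords)  # 2-D configuration of the three toy melodies
```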