The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. The 7356 recordings were each rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.

Keywords: leadership | joint action | music performance | body sway | Granger causality

Coordinating actions with others in time and space (joint action) is essential for daily life.
From opening a door for someone to conducting an orchestra, periods of attentional and physical synchrony are required to achieve a shared goal. Humans have been shaped by evolution to engage in a high level of social interaction, reflected in high perceptual sensitivity to communicative features in voices and faces, the ability to understand the thoughts and beliefs of others, sensitivity to joint attention, and the ability to coordinate goal-directed actions with others (1-3). The social importance of joint action is demonstrated by the finding that simply moving in synchrony with another increases interpersonal affiliation, trust, and/or cooperative behavior in infants and adults (e.g., refs. 4-9). The temporal predictability of music provides an ideal framework for achieving such synchronous movement, and it has been hypothesized that musical behavior evolved and remains adaptive today because it promotes cooperative social interaction and joint action (10-12). Indeed, music is used in important situations where the goal is for people to feel a social bond, such as at religious ceremonies, weddings, funerals, parties, sporting events, political rallies, and in the military...
A live music concert is a pleasurable social event that is among the most visceral and memorable forms of musical engagement. But what inspires listeners to attend concerts, sometimes at great expense, when they could listen to recordings at home? An iconic aspect of popular concerts is engaging with other audience members through moving to the music. Head movements, in particular, reflect emotion and have social consequences when experienced with others. Previous studies have explored the affiliative social engagement experienced among people moving together to music. But live concerts have other features that might also be important, such as that during a live performance the music unfolds in a unique and not predetermined way, potentially increasing anticipation and feelings of involvement for the audience. Being in the same space as the musicians might also be exciting. Here we controlled for simply being in an audience to examine whether factors inherent to live performance contribute to the concert experience. We used motion capture to compare head movement responses at a live album release concert featuring Canadian rock star Ian Fletcher Thornley, and at a concert without the performers where the same songs were played from the recorded album. We also examined effects of a prior connection with the performers by comparing fans and neutral-listeners, while controlling for familiarity with the songs, as the album had not yet been released. Head movements were faster during the live concert than the album-playback concert. Self-reported fans moved faster and exhibited greater levels of rhythmic entrainment than neutral-listeners. These results indicate that live music engages listeners to a greater extent than pre-recorded music and that a pre-existing admiration for the performers also leads to higher engagement.
The authors used a flicker paradigm for inducing change blindness as a more direct method of measuring attentional bias in problem drinkers in treatment than the previously used modified-Stroop, Posner, and dual-task paradigms. First, in an artificially constructed visual scene comprising digitized photographs of real alcohol-related and neutral objects, problem drinkers detected a change made to an alcohol-related object more quickly than one made to a neutral object. Age- and gender-matched social drinkers showed no such difference. Second, problem drinkers given the alcohol-related change to detect showed a negative correlation between the speed with which the change was detected and problem severity as measured by the number of times previously treated. Coupled with other data from heavy and light social drinkers, the data support a graded continuity of attentional bias along the length of the consumption continuum.
Joint action is essential in daily life, as humans often must coordinate with others to accomplish shared goals. Previous studies have mainly focused on sensorimotor aspects of joint action, with measurements reflecting event-to-event precision of interpersonal sensorimotor coordination (e.g., tapping). However, while emotional factors are often closely tied to joint actions, they are rarely studied, as event-to-event measurements are insufficient to capture higher-order aspects of joint action such as emotional expression. To quantify joint emotional expression, we used motion capture to simultaneously measure the body sway of each musician in a trio (piano, violin, cello) during performances. Excerpts were performed with or without emotional expression. Granger causality was used to analyze body sway movement time series amongst musicians, which reflects information flow. Results showed that the total Granger-coupling of body sway in the ensemble was higher when performing pieces with emotional expression than without. Granger-coupling further correlated with the emotional intensity as rated by both the ensemble members themselves and by musician judges, based on the audio recordings alone. Together, our findings suggest that Granger-coupling of co-actors’ body sways reflects joint emotional expression in a music ensemble, and thus provide a novel approach to studying joint emotional expression.
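The Granger-coupling analyses described in these abstracts rest on a simple idea: one performer's body sway "Granger-causes" another's if the first performer's past motion improves prediction of the second's motion beyond what the second's own past provides. The sketch below illustrates this with synthetic data and ordinary least squares; it is a minimal illustration under assumed settings (series names `x` and `y`, lag 1, made-up coupling coefficients), not the authors' actual analysis pipeline, which would use multivariate models over full motion-capture recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical body-sway series: the "leader" x drives the "follower" y at lag 1.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.5)

def rss(A, b):
    """Residual sum of squares of the least-squares fit b ~ A."""
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    return float(resid @ resid)

b = y[1:]
# Restricted model: predict y[t] from y[t-1] alone.
A_r = np.column_stack([np.ones(n - 1), y[:-1]])
# Unrestricted model: also include the leader's lagged sway x[t-1].
A_u = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])

rss_r, rss_u = rss(A_r, b), rss(A_u, b)
# F-statistic for the single added regressor: a large value means the
# leader's past sway improves prediction of the follower's sway, i.e.
# x Granger-causes y.
F = (rss_r - rss_u) / (rss_u / (n - 1 - 3))
print(f"F = {F:.1f}")
```

Summing such directed coupling strengths over all ordered pairs in an ensemble gives a total-coupling measure of the kind the trio study correlates with rated emotional intensity.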
Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.
Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18; controls M = 0.26, CI 0.14 to 0.37; ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18; controls M = 0.21, CI 0.09 to 0.34; ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = −0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.
It is commonly argued that music originated in human evolution as an adaptation to selective pressures. In this paper we present an alternative account in which music originated from a more general adaptation known as a Theory of Mind (ToM). ToM allows an individual to recognise the mental and emotional state of conspecifics, and is pivotal in the cultural transmission of knowledge. We propose that a specific form of ToM, Affective Engagement, provides the foundation for the emergence of music. Underpinned by the mirror neuron system of empathy and imitation, music achieves engagement by drawing from pre-existing functions across multiple modalities. As a multimodal phenomenon, music generates an emotional experience through the broadened activation of channels that are to be empathically matched by the audio-visual mirror neuron system.