The "Mozart effect" refers to claims that people perform better on tests of spatial abilities after listening to music composed by Mozart. We examined whether the Mozart effect is a consequence of between-condition differences in arousal and mood. Participants completed a test of spatial abilities after listening to music or sitting in silence. The music was a Mozart sonata (a pleasant and energetic piece) for some participants and an Albinoni adagio (a slow, sad piece) for others. We also measured enjoyment, arousal, and mood. Performance on the spatial task was better following the music than the silence condition, but only for participants who heard Mozart. The two music selections also induced differential responding on the enjoyment, arousal, and mood measures. Moreover, when such differences were held constant by statistical means, the Mozart effect disappeared. These findings provide compelling evidence that the Mozart effect is an artifact of arousal and mood.
We examined effects of tempo and mode on spatial ability, arousal, and mood. A Mozart sonata was performed by a skilled pianist and recorded as a MIDI file. The file was edited to produce four versions that varied in tempo (fast or slow) and mode (major or minor). Participants listened to a single version and completed measures of spatial ability, arousal, and mood. Performance on the spatial task was superior after listening to music at a fast rather than a slow tempo, and when the music was presented in major rather than minor mode. Tempo manipulations affected arousal but not mood, whereas mode manipulations affected mood but not arousal. Changes in arousal and mood paralleled variation on the spatial task. The findings are consistent with the view that the "Mozart effect" is a consequence of changes in arousal and mood.
Studies of the link between music and emotion have primarily focused on listeners' sensitivity to emotion in the music of their own culture. This sensitivity may reflect listeners' enculturation to the conventions of their culture's tonal system. However, it may also reflect responses to psychophysical dimensions of sound that are independent of musical experience. A model of listeners' perception of emotion in music is proposed in which emotion in music is communicated through a combination of universal and cultural cues. Listeners may rely on either of these cues, or both, to arrive at an understanding of musically expressed emotion. The current study addressed the hypotheses derived from this model using a cross-cultural approach. The following questions were investigated: Can people identify the intended emotion in music from an unfamiliar tonal system? If they can, is their sensitivity to intended emotions associated with perceived changes in psychophysical dimensions of music? Thirty Western listeners rated the degree of joy, sadness, anger, and peace in 12 Hindustani raga excerpts (field recordings obtained in North India). In accordance with the raga-rasa system, each excerpt was intended to convey one of the four moods or "rasas" that corresponded to the four emotions rated by listeners. Listeners also provided ratings of four psychophysical variables: tempo, rhythmic complexity, melodic complexity, and pitch range. Listeners were sensitive to the intended emotion in ragas when that emotion was joy, sadness, or anger. Judgments of emotion were significantly related to judgments of psychophysical dimensions, and, in some cases, to instrument timbre. The findings suggest that listeners are sensitive to musically expressed emotion in an unfamiliar tonal system, and that this sensitivity is facilitated by psychophysical cues.
Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances' prosody, participants identified the emotion conveyed. In Experiment 1 (n = 20), musically trained adults performed better than untrained adults. In Experiment 2 (n = 56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n = 43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.
Although people generally avoid negative emotional experiences, they often enjoy sadness portrayed in music and other arts. The present study investigated what kinds of subjective emotional experiences are induced in listeners by sad music, and whether the tendency to enjoy sad music is associated with particular personality traits. One hundred forty-eight participants listened to 16 music excerpts and rated their emotional responses. As expected, sadness was the most salient emotion experienced in response to sad excerpts. However, other more positive and complex emotions such as nostalgia, peacefulness, and wonder were also evident. Furthermore, two personality traits – Openness to Experience and Empathy – were associated with liking for sad music and with the intensity of emotional responses induced by sad music, suggesting that aesthetic appreciation and empathetic engagement play a role in the enjoyment of sad music.