*THIS PAPER HAS NOT YET BEEN PEER REVIEWED* Listening to music is a strategy many people use to regulate their emotions, especially sadness. However, there is disagreement about whether listening to music is a healthy way to regulate emotions, with some research finding that sad music worsens a sad state, especially for people high in rumination. To further explore the immediate consequences of music listening when sad, 128 young adults (41% male, aged 18 to 25 years) were induced into a sad emotional state prior to random assignment to listen to either self-selected music, experimenter-selected sad music, or no music. Results revealed that listening to either self-selected or experimenter-selected music led to a decrease in sadness, with no difference between the two music groups at post-listening. However, participants who listened to self-selected music reported a return to baseline levels of sadness, while this did not occur for participants who listened to experimenter-selected music or were in the no-music control. Rumination was also measured but did not moderate the impact of music listening on sadness in either music condition. Furthermore, rumination had no impact on participants' perceptions of sadness in music. These results support the notion that listening to sad music does not worsen a sad state—even for those high in rumination—although it does appear to slow the emotion regulation process in cases where sad music is not self-selected.
The human face is a key source of social information. In particular, it communicates a target’s personal identity and some of their main group memberships. Different models of social perception posit distinct stages at which this group-level and person-level information is extracted from the face, with divergent downstream consequences for cognition and behavior. This paper presents four experiments that explore the time-course of extracting group and person information from faces. In Experiments 1 and 2, we explore the effect of chunked versus unchunked processing on the speed of extracting group versus person information, as well as the impact of familiarity in Experiment 2. In Experiment 3, we examine the effect of the availability of a diagnostic cue on these same judgments. In Experiment 4, we explore the effect of both group-level and person-level prototypicality of face exemplars. Across all four experiments, we find no evidence for the perceptual primacy of either group or person information. Instead, we find that chunked processing, featural processing based on a single diagnostic cue, familiarity, and the prototypicality of face exemplars all result in a processing speed advantage for both group-level and person-level judgments equivalently. These results have important implications for influential models of face processing and impression formation, and can inform — and be integrated with — an understanding of the process of social categorization more broadly.