The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods can distinguish MEAMs from picture-evoked memories. Participants (N = 18) listened to popular music, viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and the Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one per scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data classified whether a memory was evoked by a face or a song with significantly above-chance accuracy. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. These results demonstrate that different memory scoring techniques provide complementary information about cued autobiographical memories, and suggest that MEAMs differ from picture-evoked memories in some respects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
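The classification analysis described above can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn: the feature matrix is randomly generated placeholder data (standing in for per-memory LIWC, AI, or EL scores), not the study's actual data, and the number of memories, features, and the cross-validation scheme are assumptions.

```python
# Hypothetical sketch: classify memory cue type (music vs. face) from
# text-derived feature scores with logistic regression, then estimate
# accuracy by cross-validation. All data here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_memories = 100

# Placeholder feature matrix (e.g., scoring-method values per memory).
X = rng.normal(size=(n_memories, 10))
# Cue labels: 0 = picture-evoked, 1 = music-evoked.
y = rng.integers(0, 2, size=n_memories)
# Shift one feature for music-evoked memories so the classes differ.
X[y == 1, 0] += 1.5

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"Mean cross-validated accuracy: {accuracy:.2f}")
```

In this framing, a scoring method "distinguishes" the two cue types when its cross-validated accuracy is significantly above the 0.5 chance level, which would be tested against a permutation or binomial baseline in practice.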
Raw data (i.e., recordings of memory responses and their transcriptions) are not made publicly available, to maintain the confidentiality and privacy of the participants; these data may be made available to researchers upon request. Aggregate data (i.e., memory codings from the Autobiographical Interview and Linguistic Inquiry and Word Count) are available in the following OSF repository (private link for peer review only; it will be made public upon acceptance of the paper): https://osf.io/2ykx5/?view_only=ad724c1492d34428a083817fe70ce34c Analysis and experimental presentation code will be shared with researchers upon request. The experiment reported in this manuscript was not preregistered.
Observers can make independent aesthetic judgments of at least two images presented briefly and simultaneously. However, it is unknown whether the same holds for two stimuli of different sensory modalities. Here, we investigated whether individuals can judge auditory and visual stimuli independently, and whether stimulus duration influences such judgments. Participants (N = 120, across two experiments and a replication) saw images of paintings and heard excerpts of music, presented simultaneously for 2 s (Experiment 1) or 5 s (Experiment 2). After each presentation, participants rated how much pleasure they felt from the stimulus (the music, the image, or the combined pleasure of both, depending on which was cued) on a 9-point scale. Finally, participants completed a baseline block in which they rated each stimulus in isolation. We used the baseline ratings to predict ratings of the audiovisual presentations. Across both experiments, the root mean square errors (RMSEs) obtained from leave-one-out cross-validation analyses showed that ratings of music and images were unbiased by the other, simultaneously presented stimulus, and that combined ratings were best described as the arithmetic mean of the ratings given to the individual presentations at the end of the experiment. This pattern of results replicates previous findings on simultaneously presented images, indicating that participants can ignore the pleasure of an irrelevant stimulus regardless of its sensory modality and the duration of stimulus presentation.
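The model comparison underlying this result can be sketched in a few lines. This is a hypothetical illustration, not the study's analysis code: the ratings below are simulated (the "combined" rating is generated as the mean of the two baseline ratings plus noise), the noise level and trial count are assumptions, and a simple RMSE over all trials stands in for the full leave-one-out procedure.

```python
# Hypothetical sketch: predict the pleasure rating of a simultaneous
# music + image presentation from baseline ratings of each stimulus
# alone, and score candidate predictors by RMSE. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 60

# Simulated baseline pleasure ratings (1-9 scale) for each excerpt and
# image, plus a combined rating built as their mean with rating noise.
music = rng.integers(1, 10, size=n_trials).astype(float)
image = rng.integers(1, 10, size=n_trials).astype(float)
combined = (music + image) / 2 + rng.normal(0, 0.5, size=n_trials)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

# Compare candidate predictors of the combined rating.
rmse_mean = rmse((music + image) / 2, combined)   # arithmetic-mean model
rmse_music = rmse(music, combined)                # music-only model
rmse_image = rmse(image, combined)                # image-only model
print(rmse_mean, rmse_music, rmse_image)
```

Under this framing, the abstract's conclusion corresponds to the arithmetic-mean predictor yielding the lowest cross-validated RMSE, with neither single-stimulus predictor systematically biasing the combined rating.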