The current study investigated the joint contribution of visual and auditory disfluencies, or distortions, to actual and predicted memory performance with naturalistic, multi-modal materials across three experiments. In Experiments 1 and 2, participants watched food-recipe clips containing visual and auditory information that was either fully intact or distorted in one or both modalities. They were asked to remember the clips for a later memory test and made memory predictions after each clip. Participants produced lower memory predictions for distorted auditory and visual information than for intact information. However, actual memory performance did not differ across encoding conditions, extending the metacognitive illusion of perceptual disfluency from static, single-word materials to naturalistic, dynamic, multi-modal materials. Experiment 3 presented naïve participants with a hypothetical scenario describing the paradigm used in Experiment 1 and likewise revealed lower memory predictions for distorted than for intact information in both modalities. Theoretically, these results imply that both in-the-moment experiences and a priori beliefs may contribute to the perceptual disfluency illusion. From an applied perspective, the study suggests that when audio-visual distortions occur, individuals may use them to predict their memory performance, even though such distortions do not factor into actual memory performance.