Visual information around us is rarely static. To perform a task in such a dynamic environment, we often have to compare current visual input with our working memory (WM) representation of the immediate past. However, little is known about what happens to a WM representation when it is compared with perceptual input. To test this, we asked young adults (N = 170 total across three experiments) to compare a new visual input with a WM representation prior to reporting the WM representation. We found that the perceptual comparison biased the WM report, especially when the input was subjectively similar to the WM representation. Furthermore, using computational modeling and individual-differences analyses, we found that this similarity-induced memory bias was driven by representational integration, rather than incidental confusion, between the WM representation and subjectively similar input. Together, our findings highlight a novel source of WM distortion and suggest a general mechanism that determines how WM interacts with new visual input.
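The abstract does not detail the computational modeling; purely as a hedged illustration of how an integration account could be separated from an incidental-confusion (swap) account, the sketch below fits two mixture models over circular report errors and compares them by AIC. All function names, parameter values, and the simulated data are assumptions for illustration, not the authors' actual model code.

```python
# Hedged sketch: separating "integration" and "swap" accounts of a
# similarity-induced memory bias with mixture models over circular errors.
# All names, values, and the fake data are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def nll_integration(params, err, delta):
    """Integration model: reports cluster around a memory value shifted
    toward the comparison stimulus by weight w, plus uniform guesses."""
    w, kappa, g = params
    p = (1 - g) * vonmises.pdf(err - w * delta, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(p + 1e-12))

def nll_swap(params, err, delta):
    """Swap model: reports center on the memory item, but a fraction s are
    incidental reports of the comparison stimulus itself, plus guesses."""
    s, kappa, g = params
    p_mem = vonmises.pdf(err, kappa)           # centered on the memory item
    p_swap = vonmises.pdf(err - delta, kappa)  # centered on the comparison input
    p = (1 - g) * ((1 - s) * p_mem + s * p_swap) + g / (2 * np.pi)
    return -np.sum(np.log(p + 1e-12))

# Fake data: signed report errors (radians) and probe-minus-memory offsets,
# generated with a genuine integration bias (w = 0.3).
rng = np.random.default_rng(0)
delta = rng.uniform(-np.pi / 4, np.pi / 4, 500)
err = rng.vonmises(0.3 * delta, 8.0)

bounds = [(0, 1), (0.1, 50), (0, 0.5)]
fit_int = minimize(nll_integration, x0=[0.2, 5.0, 0.05], args=(err, delta), bounds=bounds)
fit_swap = minimize(nll_swap, x0=[0.1, 5.0, 0.05], args=(err, delta), bounds=bounds)
aic = lambda fit: 2 * len(fit.x) + 2 * fit.fun
print(f"AIC integration = {aic(fit_int):.1f}, AIC swap = {aic(fit_swap):.1f}")
```

The two accounts make distinct predictions about the error distribution: integration shifts the whole memory distribution toward the comparison stimulus, whereas swapping adds a separate response component centered on the comparison stimulus itself, so model comparison can in principle tell them apart.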
Successful social communication requires accurate perception and maintenance of invariant (face identity) and variant (facial expression) aspects of faces. Although numerous studies have investigated how face identity and expression information is extracted from faces during perception, less is known about the temporal dynamics of this information during perception and working memory (WM) maintenance. To investigate how face identity and expression information evolve over time, I recorded electroencephalography (EEG) while participants performed a face WM task in which they remembered a face image and, after a short delay, reported either its identity or its expression. Using multivariate event-related potential (ERP) decoding analyses, I found that the two types of information exhibited dissociable temporal dynamics: Although face identity was decoded better than facial expression during perception, facial expression was decoded better than face identity during WM maintenance. Follow-up analyses suggested that this temporal dissociation was driven by differential maintenance mechanisms: Face identity information was maintained in a more “activity-silent” manner than facial expression information, presumably because invariant face information does not need to be actively tracked in the task. Together, these results provide important insights into the temporal evolution of face information during perception and WM maintenance.
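The decoding pipeline is not spelled out in the abstract; as a minimal sketch, assuming the common approach of training a classifier on the instantaneous scalp distribution at each time point with cross-validation, the example below shows how a decoding-accuracy time course can be obtained. The data shapes, labels, and classifier choice are fabricated for illustration.

```python
# Hedged sketch of time-resolved multivariate ERP decoding: a classifier is
# trained on the scalp distribution at each time point with cross-validation.
# Data shapes, labels, and the classifier choice are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 27, 200     # hypothetical EEG dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 4, n_trials)                 # e.g., four face identities

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.empty(n_times)
for t in range(n_times):
    # Decode from the instantaneous scalp distribution at time point t
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=3).mean()

print("peak decoding accuracy:", accuracy.max())  # chance = 0.25 here
```

Comparing such accuracy time courses between conditions (identity vs. expression reports) is what supports conclusions about dissociable dynamics during perception and maintenance.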
Bae & Luck (2018) reported a study of visual working memory in which the orientation being held in memory was decoded from the scalp distribution of sustained ERP activity and alphaband EEG oscillations. Decoding accuracy was compared to chance at each point during the delay interval, and a correction for multiple comparisons was applied to find clusters of consecutive above-chance time points that were stronger than would be expected by chance. However, the correction used in that study did not account for the autocorrelation of the noise and may have been overly liberal. Here, we describe a more appropriate correction procedure and apply it to the data from Bae & Luck (2018). We find that the major clusters of time points that were significantly above chance with the original correction procedure remained above chance with the updated correction procedure. However, some minor clusters that were significant with the original procedure were no longer significant with the updated procedure. We recommend that future studies use the updated correction procedure.
Computational models of motion perception suggest that readout of the motion signal can yield perception of the direction opposite to the true stimulus motion direction. However, this possibility is not detectable in standard two-alternative forced-choice (2AFC) motion discrimination (e.g., leftward vs. rightward). By allowing the motion direction to vary over 360° in typical random-dot kinematogram (RDK) displays, and by asking observers to estimate the exact direction of motion, we were able to detect the presence of opposite-direction motion perception in RDKs. This opposite-direction motion perception was replicable across multiple display types and feedback conditions, and participants had greater confidence in their opposite-direction responses than in true guess responses. When we fed RDKs into a computational model of motion processing, we found that the model estimated substantial motion activity in the direction opposite to the coherent stimulus direction, even though no such motion was objectively present in the stimuli, suggesting that opposite-direction motion perception may be a consequence of the properties of motion-selective neurons in visual cortex. Together, these results demonstrate that the perception of opposite-direction motion in RDKs is consistent with the known properties of the visual system.
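The abstract does not identify the model; the toy sketch below is only a loose, simplified stand-in (a direction-tuned population reading out dot displacements, rather than a full spatiotemporal motion-energy model), illustrating how motion signals can register at directions, including the opposite one, that carry no coherent stimulus motion. All parameter values are assumptions.

```python
# Hedged, highly simplified stand-in for a motion-processing model: dot
# displacements in an RDK drive a bank of direction-tuned units, and the
# noise dots produce activity even at the direction opposite to the coherent
# motion. The actual study's model is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
n_dots, coherence = 200, 0.3
n_signal = int(coherence * n_dots)

# Frame-to-frame displacement direction of each dot (coherent direction = 0)
dirs = np.concatenate([
    np.zeros(n_signal),                           # signal dots move rightward
    rng.uniform(0, 2 * np.pi, n_dots - n_signal)  # noise dots move randomly
])

# Direction-tuned population with von Mises tuning curves
preferred = np.linspace(0, 2 * np.pi, 36, endpoint=False)
kappa = 3.0
tuning = np.exp(kappa * np.cos(preferred[:, None] - dirs[None, :]))
population = tuning.sum(axis=1)

print("activity at coherent direction (0): ", population[0])
print("activity at opposite direction (pi):", population[18])  # preferred[18] == pi
```

This sketch only shows the population-readout logic; the abstract's stronger claim is that a realistic motion-processing model itself generates substantial opposite-direction activity from RDKs, beyond any such noise-driven baseline.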