Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, vividness also fluctuates greatly within individuals, so looking only at differences between people necessarily obscures part of the picture. In this study, we show that variation in the moment-to-moment experienced vividness of visual imagery within human subjects depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception across the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is considerably more complex than studies of individual differences have suggested.
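The abstract does not spell out the multivariate technique. One common way to operationalize imagery-perception overlap is cross-decoding: train a classifier on perception trials, test it on imagery trials, and relate the resulting per-trial classifier evidence to vividness ratings. The Python sketch below illustrates that idea on synthetic data; all names (X_perc, X_img, vividness, and so on) are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical cross-decoding sketch on synthetic data. Voxel patterns,
# labels, and vividness ratings are stand-ins, not the paper's data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_perc, n_img, n_voxels = 120, 120, 500
X_perc = rng.standard_normal((n_perc, n_voxels))   # perception-trial patterns
y_perc = rng.integers(0, 2, n_perc)                # perceived stimulus labels
X_img = rng.standard_normal((n_img, n_voxels))     # imagery-trial patterns
y_img = rng.integers(0, 2, n_img)                  # imagined stimulus labels
vividness = rng.integers(1, 6, n_img)              # per-trial vividness ratings

# Train on perception, test on imagery: above-chance transfer indicates
# overlapping neural representations across the two conditions.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_perc, y_perc)

# Per-trial overlap evidence: probability the perception-trained classifier
# assigns to the stimulus that was actually imagined on each trial.
proba = clf.predict_proba(X_img)
evidence = proba[np.arange(n_img), y_img]

# Does trial-wise neural overlap track experienced vividness?
rho, p = spearmanr(evidence, vividness)
print(f"overlap-vividness correlation: rho={rho:.2f}, p={p:.3f}")
```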
For decades, the extent to which visual imagery relies on neural mechanisms similar to those of visual perception has been a topic of debate. Here, we review recent neuroimaging studies comparing these two forms of visual experience. Their results suggest that there is large overlap in neural processing during perception and imagery: neural representations of imagined and perceived stimuli are similar in visual, parietal, and frontal cortex. Furthermore, perception and imagery seem to rely on similar top-down connectivity. The most prominent difference is the absence of bottom-up processing during imagery. These findings fit well with the idea that imagery and perception rely on similar emulation or prediction processes.

Externally and internally generated visual experience

A large part of our sensory experience is visual. When walking down the street, we are bombarded with different colors, shapes, and textures. Likewise, when thinking about future or past events, most people experience a rapid stream of detailed images [1]. Visual experience can be triggered externally, by events in the outside world that change the light falling onto our retinas, as during perception (see Glossary); or internally, by information from memory, via a process known as mental imagery (see Box 1 on the relationship between imagery and working memory). Generally, these are seen as two distinct phenomena. However, they are phenomenologically similar, which can sometimes lead us to question whether we really saw something or whether it was just our imagination.

The question of the extent to which visual imagery relies on the same neural mechanisms as perception has been debated for decades. Originally, the debate centered on whether imagery, like perception, relies on depictive, picture-like representations or on symbolic, language-like representations [2-5]. Because of imagery's inherently private nature, this question was long impossible to address. Neuroimaging studies on the involvement of the primary visual cortex during imagery have now largely resolved this debate in favor of the depictive view [6]. However, a broader perspective, addressing the involvement and interaction of brain regions beyond the primary visual cortex, has been missing.

The current review explores to what extent externally and internally generated visual experiences rely on similar neural mechanisms. We discuss the findings with respect to the visual areas that are central to the depictivism-versus-symbolism debate, but we also consider the involvement of parietal and frontal areas. Next, we focus on the temporal dynamics of neural processing during both forms of visual experience. After that, we discuss the overlap in directional connectivity between perception and imagery. We conclude that perception and imagery are in fact highly similar, and we discuss the issues and questions raised by this conclusion.
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. First, we show that, compared to perception, imagery decoding becomes significant later and that representations at the start of imagery already overlap with those at later time points. This suggests that during imagery the entire visual representation is activated at once, or that the timing of imagery differs substantially between trials. Second, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 is reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights into the neural mechanisms of visual imagery.
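Comparing representations at the start of imagery with those at later time points is the kind of question typically addressed with temporal generalization, where a classifier trained at each time point is tested at every other time point. A minimal sketch using MNE-Python's GeneralizingEstimator on synthetic MEG data follows; the data shapes and labels are assumptions, not the study's recordings.

```python
# Minimal temporal-generalization sketch on synthetic data. The shapes
# (trials x sensors x time points) and labels are assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 272, 120))  # trials x sensors x time points
y = rng.integers(0, 2, 100)               # stimulus labels per trial

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)

# scores[t_train, t_test]: a classifier trained at one time point is tested
# at all others. Sustained off-diagonal decoding suggests a representation
# that is active as a whole rather than passing through discrete stages.
scores = cross_val_multiscore(gen, X, y, cv=5, n_jobs=1).mean(axis=0)
print(scores.shape)  # (n_times, n_times)
```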
The cortical reinstatement hypothesis of memory retrieval posits that content-specific cortical activity present at encoding is reinstated at retrieval. Evidence for cortical reinstatement has been found in higher-order sensory regions, reflecting reactivation of complex object-based information. However, it remains unclear whether the same detailed, feature-based sensory information perceived during encoding is subsequently reinstated in early sensory cortex, and what role the hippocampus plays in this process. In this study, we used a combination of visual psychophysics, functional neuroimaging, multivoxel pattern analysis, and a well-controlled cued recall paradigm to address this issue. We found that the visual information human participants were retrieving could be predicted from the activation patterns in early visual cortex. Importantly, this reinstatement resembled the neural pattern elicited when participants viewed the visual stimuli passively, indicating shared representations between stimulus-driven activity and memory. Furthermore, hippocampal activity covaried with the strength of stimulus-specific cortical reinstatement on a trial-by-trial basis during cued recall. These findings provide evidence for the reinstatement of unique associative memories in early visual cortex and suggest that the hippocampus modulates the mnemonic strength of this reinstatement.
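A common way to quantify this kind of reinstatement is pattern similarity: correlate each cued-recall trial's pattern in early visual cortex with the template pattern evoked by passive viewing of the retrieved stimulus, then relate that similarity to hippocampal activity trial by trial. The sketch below illustrates the idea on synthetic data; all names and shapes are assumptions rather than the paper's actual analysis.

```python
# Hypothetical pattern-similarity reinstatement sketch on synthetic data.
# Templates, recall patterns, and hippocampal signals are stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_trials, n_voxels, n_stimuli = 80, 400, 4
templates = rng.standard_normal((n_stimuli, n_voxels))  # passive-viewing patterns
recall = rng.standard_normal((n_trials, n_voxels))      # cued-recall patterns
recalled_stim = rng.integers(0, n_stimuli, n_trials)    # which stimulus was cued
hippocampus = rng.standard_normal(n_trials)             # trial-wise hippocampal signal

# Reinstatement strength: correlation between each recall pattern and the
# perceptual template of the stimulus being retrieved on that trial.
reinstatement = np.array([
    pearsonr(recall[i], templates[recalled_stim[i]])[0]
    for i in range(n_trials)
])

# Does hippocampal activity covary with cortical reinstatement strength?
r, p = pearsonr(reinstatement, hippocampus)
print(f"hippocampus-reinstatement coupling: r={r:.2f}, p={p:.3f}")
```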
Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high-temporal-resolution, source-reconstructed cortical activity with CNNs and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network's layer representations: the increasingly abstract stimulus representations in the hierarchical network model were reflected in different parts of the visual cortex, following the ventral visual stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
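As a rough illustration of a CNN-based encoding model, one can extract activations from a layer of a pretrained network and fit a regularized linear regression from those features to the measured response at a given source and time point. The sketch below uses AlexNet via torchvision and ridge regression on synthetic data; the network, the layer choice, and all variables are stand-ins, since the paper's exact pipeline is not given at this level of detail.

```python
# Hypothetical CNN encoding-model sketch. The network (AlexNet), layer, and
# all data are illustrative assumptions, not the paper's pipeline.
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import RidgeCV
from torchvision import transforms
from torchvision.models import alexnet

rng = np.random.default_rng(0)
model = alexnet(weights="IMAGENET1K_V1").eval()
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def cnn_features(img, layer=3):
    """Flattened activations after the first `layer` modules of model.features."""
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        x = model.features[:layer](x)
    return x.flatten().numpy()

# Stand-in stimuli and a stand-in source-level MEG response at one
# (source, time point) pair.
images = [Image.fromarray(rng.integers(0, 256, (224, 224, 3), dtype=np.uint8), "RGB")
          for _ in range(20)]
features = np.stack([cnn_features(im) for im in images])
meg_response = rng.standard_normal(20)

# Encoding model: ridge regression from CNN-layer features to the response.
# Repeating this per source, time point, and network layer maps where and
# when each level of the model hierarchy explains cortical activity.
ridge = RidgeCV(alphas=np.logspace(-1, 4, 6)).fit(features[:15], meg_response[:15])
pred = ridge.predict(features[15:])
print(np.corrcoef(pred, meg_response[15:])[0, 1])
```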