The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole-brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.
In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
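The rapid-MVPA approach described above rests on time-resolved decoding: at every timepoint of the stimulus-locked epoch, a classifier is trained and tested (cross-validated) on the spatial pattern of sensor activations, yielding a decoding-accuracy timecourse. The following is a minimal sketch of that logic, not the authors' pipeline: it uses simulated data, a simple nearest-centroid classifier, and arbitrary dimensions (80 trials, 16 channels, 60 timepoints, a condition difference injected from timepoint 30 onward), where published work typically uses regularised linear classifiers on real M/EEG epochs.

```python
import numpy as np

def decode_timecourse(epochs, labels, n_folds=4, rng_seed=0):
    """Cross-validated nearest-centroid decoding at each timepoint.

    epochs: (n_trials, n_channels, n_times); labels: (n_trials,) with classes 0/1.
    Returns decoding accuracy per timepoint (chance = 0.5).
    """
    rng = np.random.default_rng(rng_seed)
    n_trials, _, n_times = epochs.shape
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    acc = np.zeros(n_times)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n_trials), test_idx)
        for t in range(n_times):
            Xtr, Xte = epochs[train_idx, :, t], epochs[test_idx, :, t]
            ytr, yte = labels[train_idx], labels[test_idx]
            # Class centroids in sensor space; assign test trials to nearer one.
            c0 = Xtr[ytr == 0].mean(axis=0)
            c1 = Xtr[ytr == 1].mean(axis=0)
            pred = (np.linalg.norm(Xte - c1, axis=1) <
                    np.linalg.norm(Xte - c0, axis=1)).astype(int)
            acc[t] += (pred == yte).mean() / n_folds
    return acc

# Simulated epochs: the condition difference only emerges after "stimulus
# processing" begins at timepoint 30, so decoding should rise there.
rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 80, 16, 60
labels = np.repeat([0, 1], n_trials // 2)
epochs = rng.normal(size=(n_trials, n_channels, n_times))
effect = np.zeros((n_channels, n_times))
effect[:, 30:] = 1.0
epochs[labels == 1] += effect
acc = decode_timecourse(epochs, labels)
```

The timecourse hovers at chance before the injected effect and rises sharply after it, which is the signature used in these studies to track when stimulus information becomes available in the neural signal.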
Standard human EEG systems based on spatial Nyquist estimates suggest that 20–30 mm electrode spacing suffices to capture neural signals on the scalp, but recent studies posit that increasing sensor density can provide higher resolution neural information. Here, we compared “super-Nyquist” density EEG (“SND”) with Nyquist density (“ND”) arrays for assessing the spatiotemporal aspects of early visual processing. EEG was measured from 128 electrodes arranged over occipitotemporal brain regions (14 mm spacing) while participants viewed flickering checkerboard stimuli. Analyses compared SND with ND-equivalent subsets of the same electrodes. Frequency-tagged stimuli were classified more accurately with SND than ND arrays in both the time and the frequency domains. Representational similarity analysis revealed that a computational model of V1 correlated more highly with the SND than the ND array. Overall, SND EEG captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience.
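Frequency tagging, as used in the checkerboard experiment above, exploits the fact that a stimulus flickering at a fixed rate drives a neural response at exactly that frequency, which stands out as a narrow spectral peak. The sketch below illustrates the quantification step only, with assumed parameters (250 Hz sampling, a 6 Hz tag, 10 s of data) and a simulated sinusoid-plus-noise signal rather than recorded EEG: the response is measured as the spectral amplitude at the tagged frequency relative to neighbouring frequency bins.

```python
import numpy as np

fs = 250.0     # sampling rate in Hz (assumed)
tag_f = 6.0    # stimulus flicker frequency in Hz (assumed)
dur = 10.0     # recording length in seconds; bins are 1/dur = 0.1 Hz apart
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# Simulated scalp signal: tagged sinusoid buried in broadband noise.
signal = 0.5 * np.sin(2 * np.pi * tag_f * t) + rng.normal(scale=1.0, size=t.size)

# Amplitude spectrum of the recording.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Signal-to-noise ratio: amplitude at the tag frequency divided by the
# mean amplitude of nearby bins (skipping the bins immediately adjacent).
tag_bin = int(np.argmin(np.abs(freqs - tag_f)))
neighbours = np.r_[tag_bin - 12:tag_bin - 2, tag_bin + 3:tag_bin + 13]
snr = spectrum[tag_bin] / spectrum[neighbours].mean()
```

An SNR well above 1 at the tag frequency (and, in practice, at its harmonics) indicates a reliable stimulus-driven response; comparing such spectral features across electrode subsets is one way dense and sparse arrays can be benchmarked.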
How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long, Yu, & Konkle, 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
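Representational similarity analysis, which underlies the comparison of intact and scrambled objects above (and the V1-model comparison in the dense-array study), abstracts away from raw activation patterns: each condition set is summarised by a representational dissimilarity matrix (RDM) of pairwise pattern distances, and two RDMs are compared by rank-correlating their upper triangles. The following is a minimal numpy sketch on hypothetical data (20 conditions, 64 response features, a "scrambled" set constructed to share structure with the "intact" set), not the published analysis:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition patterns (one row per condition)."""
    return 1.0 - np.corrcoef(patterns)

def upper(mat):
    """Off-diagonal upper triangle of a square matrix, as a vector."""
    i, j = np.triu_indices(mat.shape[0], k=1)
    return mat[i, j]

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical response patterns: 'scrambled' shares representational
# structure with 'intact' (plus noise); 'unrelated' does not.
rng = np.random.default_rng(0)
intact = rng.normal(size=(20, 64))
scrambled = intact + 0.5 * rng.normal(size=(20, 64))
unrelated = rng.normal(size=(20, 64))

r_shared = spearman(upper(rdm(intact)), upper(rdm(scrambled)))
r_null = spearman(upper(rdm(intact)), upper(rdm(unrelated)))
```

A high RDM correlation for the shared-structure pair, against a near-zero correlation for unrelated patterns, is the kind of evidence used to argue that scrambled objects preserve the mid-level feature geometry of their intact counterparts.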
Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored, cued by colour) and temporal expectation (stimulus onset timing was either predictable or not). We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.