The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating information matrices derived from pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from early visual cortex (EVC), and later time windows (starting around 190 ms) to match data from the occipital and fusiform face areas (OFA/FFA) and the posterior superior temporal sulcus (pSTS). According to model comparisons, the EEG classifications were based more on low-level visual features than on expression intensities or categories. In fMRI, the model comparisons revealed a change along the processing hierarchy, from low-level visual feature coding in EVC to coding of expression intensity in the right pSTS. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
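As an illustration of this fusion approach, the sketch below builds representational dissimilarity matrices (RDMs) from cross-validated pairwise SVM classification and correlates their vectorized upper triangles across modalities. This is a minimal sketch, not the authors' analysis code; the array names and shapes, the linear-kernel choice, and the use of Spearman correlation are assumptions made for the example.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pairwise_decoding_rdm(patterns, labels):
    """RDM whose cells hold cross-validated pairwise SVM decoding
    accuracies between two expression conditions (higher accuracy =
    more dissimilar neural patterns)."""
    conditions = np.unique(labels)
    n = len(conditions)
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.isin(labels, [conditions[i], conditions[j]])
            acc = cross_val_score(SVC(kernel="linear"),
                                  patterns[mask], labels[mask], cv=5).mean()
            rdm[i, j] = rdm[j, i] = acc
    return rdm

def rdm_correlation(rdm_a, rdm_b):
    """Spearman correlation of the vectorized upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# Hypothetical usage: one RDM per EEG time window, one per fMRI ROI.
# fusion = rdm_correlation(rdm_eeg_130ms, rdm_fmri_evc)
```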
Many studies of visual working memory have tested humans' ability to reproduce primary visual features of simple objects, such as the orientation of a grating or the hue of a color patch, following a delay. A consistent finding of such studies is that the precision of responses declines as the number of items in memory increases. Here we compared visual working memory for primary features and high-level objects. We presented participants with memory arrays consisting of oriented gratings, facial expressions, or a mixture of both. Precision of reproduction for all facial expressions declined steadily as memory load increased from one to five faces. For primary features, this decline, and the specific distributions of error observed, have been parsimoniously explained in terms of neural population codes. We adapted the population coding model for circular variables to the non-circular, bounded parameter space used for expression estimation. Total population activity was held constant according to the principle of normalization, and the intensity of expression was decoded by drawing samples from the Bayesian posterior distribution. The model fit the data well, showing that principles of population coding can be applied to model memory representations at multiple levels of the visual hierarchy. When both gratings and faces had to be remembered, an asymmetry was observed: increasing the number of faces decreased the precision of orientation recall, but increasing the number of gratings did not affect recall of expression, suggesting that memorizing faces involves the automatic encoding of low-level features in addition to higher-level expression information.
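The population-coding account can be illustrated with a small simulation. The sketch below is not the authors' model code; the Gaussian tuning curves, gain value, grid resolution, and Poisson noise are assumptions chosen for the example. It encodes a bounded expression intensity in a normalized population (a fixed total gain divided among the items in memory) and decodes by drawing a sample from the Bayesian posterior, so that response variability grows with set size.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 201)       # bounded expression-intensity axis
centers = np.linspace(0.0, 1.0, 33)     # neurons' preferred intensities
WIDTH, TOTAL_GAIN = 0.15, 50.0          # assumed tuning width and total gain

def tuning(intensity, set_size):
    """Gaussian tuning on a bounded axis; normalization divides the
    fixed total population gain among the items held in memory."""
    gain = TOTAL_GAIN / set_size
    return gain * np.exp(-0.5 * ((centers - intensity) / WIDTH) ** 2)

def recall(intensity, set_size):
    """Encode as Poisson spike counts, decode by drawing one sample
    from the Bayesian posterior over the intensity grid."""
    spikes = rng.poisson(tuning(intensity, set_size))
    log_post = np.array([np.sum(spikes * np.log(tuning(s, set_size) + 1e-12)
                                - tuning(s, set_size)) for s in grid])
    post = np.exp(log_post - log_post.max())
    return rng.choice(grid, p=post / post.sum())

for n in (1, 3, 5):                     # precision falls as load grows
    err = np.array([recall(0.6, n) - 0.6 for _ in range(500)])
    print(f"set size {n}: error SD = {err.std():.3f}")
```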
Acknowledgements: We thank Mia Illman for help with the MEG measurements and Veli-Matti Saarinen for help with the eye-tracking measurements. The annotations of the stimulus images were created using the Object Labeling Tool from Derek Hoiem, University of Illinois.

Abstract: Natural visual behaviour entails explorative eye movements, saccades, that bring different parts of a visual scene into central vision. The neural processes guiding the selection of saccade targets are still largely unknown. In this study, we therefore tracked with magnetoencephalography (MEG) the cortical dynamics of viewers who were freely exploring novel natural scenes. Overall, the viewers were largely consistent in their gaze behaviour, especially when a scene contained people. We took a fresh approach to relating the eye-gaze data to the MEG signals by characterizing dynamic cortical representations by means of representational distance matrices. Specifically, we compared the representational distances between the stimuli in the evoked MEG responses with predictions based (1) on the low-level visual similarity of the stimuli (as visually more similar stimuli evoke more similar responses in early visual areas) and (2) on the eye-gaze data. At 50-75 ms after scene onset, the similarity of the occipital MEG patterns correlated with the low-level visual similarity of the scenes, and already at 75-100 ms the visual features attracting the first saccades predicted the similarity of the right parieto-occipital MEG responses. Thereafter, at 100-125 ms, the landing positions of the upcoming saccades explained the MEG responses. These results indicate that MEG signals contain signatures of the rapid processing of natural visual scenes as well as of the initiation of the first saccades, with the processing of the saccade target preceding the processing of the landing position of the upcoming saccade.

Humans naturally make eye movements to bring different parts of a visual scene to the fovea, where visual acuity is highest. Tracking of eye gaze can reveal how we make inferences about the content of a scene by looking at different objects, or which visual cues automatically attract our attention and gaze. The brain dynamics governing natural gaze behaviour are still largely unknown. Here we propose a novel approach to relating eye-tracking results to brain activity measured with magnetoencephalography (MEG), and demonstrate signatures of natural gaze behaviour in the MEG data even before the eye movements occur.
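A minimal sketch of this RDM time-course analysis is given below; it is not the authors' pipeline, and the window length, the correlation-distance RDM, and the array shapes are assumptions made for the example. It slides a window over the evoked MEG responses, builds a pattern RDM per window, and correlates it with a model RDM, such as low-level image similarity or a gaze-based prediction.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm_timecourse(meg, model_rdm, win=25, step=25):
    """Correlate sliding-window MEG pattern RDMs with a model RDM.
    meg: (n_stimuli, n_sensors, n_times) evoked responses;
    model_rdm: (n_stimuli, n_stimuli) prediction, e.g. low-level
    image similarity or similarity of first-saccade gaze patterns."""
    n_stim, _, n_times = meg.shape
    iu = np.triu_indices(n_stim, k=1)
    rhos = []
    for t0 in range(0, n_times - win + 1, step):
        pats = meg[:, :, t0:t0 + win].reshape(n_stim, -1)
        rdm = 1.0 - np.corrcoef(pats)   # correlation-distance RDM
        rho, _ = spearmanr(rdm[iu], model_rdm[iu])
        rhos.append(rho)
    return np.array(rhos)
```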