The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12 s trial. These baseline measures will be of use to future work studying eye movement behavior in scenes across a variety of literatures. We also examine effects of viewing task on when and where the eyes move in real-world scenes: participants engaged in a memorization task and an aesthetic judgment task while viewing 100 scenes. While we find no difference between the two tasks at the mean level, temporal- and distribution-level analyses reveal significant task-driven differences in eye movement behavior.
Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigated how gist information, specific visual properties, and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted to investigate how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images; they were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) When participants were provided with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, they detected the targets significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention; neither can gist information alone. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.
As an object rotates, each location on the object moves with an instantaneous linear velocity that depends on its distance from the center of rotation, while the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, its linear velocities, or some combination of the two? We had observers perform relative speed judgments of different-sized objects, as changing the size of an object changes the linear velocity of each location on the object's surface while maintaining the object's angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, the perceived rotational speeds of objects with corners or regions of high contour curvature were less affected by size. The results suggest that distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object.
During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photographs of real-world scenes in an MRI scanner while their eye movements were recorded. Fixation-related fMRI was used to measure activation as a function of lower- and higher-level scene properties at fixation, operationalized as edge density and meaning maps, respectively. We found that edge density at fixation was most associated with activation in early visual areas, whereas semantic content at fixation was most associated with activation along the ventral visual stream including core object and scene-selective areas (lateral occipital complex, parahippocampal place area, occipital place area, and retrosplenial cortex). The observed activation from semantic content was not accounted for by differences in edge density. The results are consistent with active vision models in which fixation gates detailed visual analysis for fixated scene regions, and this gating influences both lower and higher levels of scene analysis.
The brain is organized such that it encodes and maintains category information about thousands of objects. However, how learning shapes these neural representations of object categories is unknown. The present study focuses on faces, examining whether (1) enhanced categorical discrimination or (2) feature analysis enhances face/non-face categorization in the brain. Stimuli ranged from non-faces to faces, with two-toned Mooney images used for testing and gray-scale images used for training. The stimulus set was specifically chosen because it has a true categorical boundary between faces and non-faces, but the stimuli surrounding that boundary have very similar features, making the boundary harder to learn. Brain responses were measured using functional magnetic resonance imaging while participants categorized the stimuli before and after training. Participants were trained either with a categorization task or with a non-categorical resemblance analysis task. Interestingly, when participants were categorically trained, the neural activity pattern in the left fusiform gyrus shifted from a graded representation of the stimuli to a categorical representation. This corresponded with categorical face/non-face discrimination, critically including both an increase in selectivity to faces and a decrease in false alarm responses to non-faces. By contrast, while the activity pattern in the right fusiform cortex correlated with face/non-face categorization prior to training, it was not affected by learning. Our results reveal the key role of the left fusiform cortex in learning face categorization. Given the known right-hemisphere dominance for face-selective responses, our results suggest a reconsideration of the relationship between the two hemispheres in face/non-face categorization. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.