Two experiments (one using O- and Q-like stimuli and the other using colored, oriented bars) investigated the oculomotor behavior accompanying parallel-serial visual search. Eye movements were recorded as participants searched for a target in 5- or 17-item displays. Results indicated the presence of parallel-serial search dichotomies and 2:1 ratios of negative-trial (target-absent) to positive-trial (target-present) slopes in the number of saccades initiated during both search tasks. This saccade-number measure also correlated highly with search times, accounting for up to 67% of the reaction time (RT) variability. Weak correlations between fixation durations and RTs suggest that this oculomotor measure may be related more to stimulus factors than to search processes. A third experiment compared free-eye and fixed-eye search and found a small RT advantage when eye movements were prevented. Together, these findings suggest that parallel-serial search dichotomies are reflected in oculomotor behavior.
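The search slopes described above are the standard linear fits of RT (or saccade count) against display size; a 2:1 absent-to-present slope ratio is the classic signature of serial self-terminating search. A minimal sketch of that computation, using hypothetical RT values (the numbers below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical mean RTs (ms) for 5- and 17-item displays.
set_sizes = np.array([5.0, 17.0])
rt_present = np.array([620.0, 740.0])  # target-present trials
rt_absent = np.array([650.0, 890.0])   # target-absent trials

# Search slope = ms of RT added per additional display item,
# taken from a linear fit of RT against set size.
slope_present = np.polyfit(set_sizes, rt_present, 1)[0]
slope_absent = np.polyfit(set_sizes, rt_absent, 1)[0]

# Serial self-terminating search predicts roughly a 2:1 ratio.
ratio = slope_absent / slope_present
print(round(slope_present, 1), round(slope_absent, 1), round(ratio, 1))
# prints 10.0 20.0 2.0
```

The same fit applied to saccade counts instead of RTs gives the saccade-number slopes the abstract reports.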
Previous research has shown that when searching for a color singleton, top-down control cannot prevent attentional capture by an abrupt visual onset. The present research addressed whether a task-irrelevant abrupt onset would affect eye movement behavior when searching for a color singleton. Results show that in many instances the eye moved in the direction of the task-irrelevant abrupt onset. There was evidence that top-down control could neither entirely prevent attentional capture by visual onsets nor prevent the eye from starting to move in the direction of the onset. Results suggest parallel programming of 2 saccades: 1 voluntary goal-directed eye movement toward the color singleton target and 1 stimulus-driven eye movement reflexively elicited by the abrupt onset. A neurophysiologically plausible model that can account for the current findings is discussed.
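The parallel-programming account above can be caricatured as a race between two independently prepared saccade programs: whichever program reaches threshold first determines the initial eye movement. A toy deterministic sketch of that idea (accumulator rates and the threshold are illustrative assumptions, not fitted parameters from the study):

```python
def race(rates, threshold=1.0):
    """Toy race between concurrently programmed saccades.

    Each program i is a deterministic linear accumulator,
    level_i(t) = rate_i * t, so it reaches threshold at
    t_i = threshold / rate_i. The fastest program wins and
    determines the direction of the first eye movement.
    """
    finish_times = [threshold / r for r in rates]
    winner = min(range(len(finish_times)), key=finish_times.__getitem__)
    return winner, finish_times[winner]

# Program 0: voluntary, goal-directed saccade toward the color singleton.
# Program 1: reflexive, stimulus-driven saccade toward the abrupt onset,
# given a higher rate here to reflect its faster, exogenous nature.
winner, t_win = race([2.5, 4.0])
```

On trials where the reflexive program's rate dominates, the model's first saccade goes toward the onset, mirroring the behavioral finding that the eye often started moving toward the task-irrelevant onset.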
Understanding how goal states control behavior is a question ripe for interrogation by new methods from machine learning. These methods require large, labeled datasets to train models. To annotate a large-scale image dataset with observed search fixations, we collected 16,184 fixations from people searching for either microwaves or clocks in a dataset of 4,366 images (MS-COCO). We then used this behaviorally annotated dataset and the machine learning method of inverse reinforcement learning (IRL) to learn target-specific reward functions and policies for these two target goals. Finally, we used these learned policies to predict the fixations of 60 new behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of kitchen scenes depicting both a microwave and a clock (thus controlling for differences in low-level image contrast). We found that the IRL model predicted both behavioral search efficiency and fixation-density maps, as evaluated by multiple metrics. Moreover, reward maps from the IRL model revealed target-specific patterns suggesting not just attention guidance by target features but also guidance by scene context (e.g., fixations along walls when searching for clocks). Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.