Ignoring salient distracting information is paramount to efficiently guiding attention during visual search. Learning to reject or suppress these strong sources of distraction leads to more effective visual search for targets. Participants can learn to overcome salient distractors if given reliable search regularities. If salient distractors appear in 1 location more frequently than any other, the visual system can use this environmental regularity to reduce attentional capture at the more frequent location (Wang & Theeuwes, 2018). We asked whether reduced attentional capture is limited to location-based regularities or whether the visual attentional system is configured to use feature-based regularities to reduce attentional capture as well. In 4 experiments examining attentional capture by task-irrelevant color singletons, participants searched for a shape singleton target among homogeneously colored distractors. Critically, on a proportion of trials, a salient color singleton distractor was presented. Color singleton distractors that appeared at a frequent location captured attention less than color singleton distractors that appeared at infrequent locations, replicating previous findings. In subsequent experiments, we manipulated the frequency of the colors of the color singleton distractors and observed robust reductions in capture based on color feature regularities. Despite strong location information, we observed reliable attenuation of attentional capture by frequently presented distractor colors. Our results suggest that attentional capture is attenuated by both location and feature information.

Public Significance Statement
Theories of attention discuss both guidance toward target stimuli and guidance away from distracting stimuli based on information extracted from the environment. For example, in searching for a friend on a crowded city street, one might search for the specific color of the shirt they were wearing, and in doing so, one might avoid searching through other colors.
The same search strategy could be applied to locations: One might search plausible locations (e.g., walking among others) and avoid implausible locations (e.g., on the sides of buildings). Recently, location-based information has been shown to reduce distraction by stimuli appearing in frequently presented locations. Attentional guidance based on feature-based information is also important. Our work demonstrates that both location- and feature-based sources of information can be used to efficiently reduce the distraction elicited by salient distractors. Our results suggest that the visual attentional system is adept at extracting and using feature- and location-based statistical regularities to reduce distraction associated with those sources of information.
In five experiments, we examined whether a task-irrelevant item in visual working memory (VWM) interacts with perceptual selection when VWM must also be used to maintain a template representation of a search target. This question is critical to distinguishing between competing theories specifying the architecture of interaction between VWM and attention. The single-item template hypothesis (SIT) posits that only a single item in VWM can be maintained in a state that interacts with attention; thus, the secondary item should be inert with respect to attentional guidance. The multiple-item template hypothesis (MIT) posits that multiple items can be maintained in a state that interacts with attention; thus, both the target representation and the secondary item should be capable of guiding selection. This question has been addressed previously in attention capture studies, but the results have been ambiguous. Here, we modified these earlier paradigms to optimize sensitivity to capture. Capture by a distractor matching the secondary item in VWM was observed consistently across multiple types of search task (abstract arrays and natural scenes), multiple dependent measures (search reaction time and oculomotor capture), multiple memory dimensions (color and shape), and multiple search stimulus dimensions (color, shape, common objects), providing strong support for the MIT.
Visual search through real-world scenes is guided both by a representation of target features and by knowledge of the semantic properties of the scene (derived from scene gist recognition). In 3 experiments, we compared the relative roles of these 2 sources of guidance. Participants searched for a target object in the presence of a critical distractor object. The color of the critical distractor either matched or mismatched (a) the color of an item maintained in visual working memory for a secondary task (Experiment 1), or (b) the color of the target, cued by a picture before search commenced (Experiments 2 and 3). Capture of gaze by a matching distractor served as an index of template guidance. There were 4 main findings: (a) The distractor match effect was observed from the first saccade on the scene, (b) it was independent of the availability of scene-level gist-based guidance, (c) it was independent of whether the distractor appeared in a plausible location for the target, and (d) it was preserved even when gist-based guidance was available before scene onset. Moreover, gist-based, semantic guidance of gaze to target-plausible regions of the scene was delayed relative to template-based guidance. These results suggest that feature-based template guidance is not limited to plausible scene regions after an initial, scene-level analysis.
Theories of working memory (WM) differ in their claims about the number of items that can be maintained in a state that directly interacts with other, ongoing cognitive operations (termed the focus of attention). A similar debate has arisen in the literature on visual working memory (VWM), focused on the number of items that can simultaneously interact with attentional priority. In 3 experiments, we used a redundancy-gain paradigm to provide a comprehensive test of the latter question. Participants searched for 2 cued features (e.g., a color and a shape) within a search array. The cued feature values changed on a trial-by-trial basis, requiring VWM. The target (when present) could match 1 of the cued features (single-target trials) or both cued features (redundant-target trials). We tested whether response time distributions contained a substantial proportion of redundant-target trials with responses faster than predicted by 2 independent guidance processes operating in parallel (i.e., violations of the race-model inequality). Violations are consistent with a coactive architecture in which both cued values guide attention in parallel and sum on the priority map. Robust violations were observed in all cases predicted by the hypothesis that multiple items in VWM can guide attention simultaneously, and these results were inconsistent with the hypothesis that guidance is limited to a single item at a time. When considered in the larger context of the literature on VWM and attention, the present results are consistent with a model of WM architecture in which the focus of attention can maintain multiple, independent representations.
Computer classifiers have been successful at classifying various tasks using eye movement statistics. However, the question of human classification of task from eye movements has rarely been studied. Across two experiments, we examined whether humans could classify task based solely on the eye movements of other individuals. In Experiment 1, human classifiers were shown one of three sets of eye movements: Fixations, which were displayed as blue circles, with larger circles indicating longer fixation durations; Scanpaths, which were displayed as yellow arrows; and Videos, in which a neon green dot moved around the screen. There was an additional Scene manipulation in which eye movement properties were displayed either on the original scene where the task (Search, Memory, or Rating) was performed or on a black background in which no scene information was available. Experiment 2 used similar methods but displayed only Fixations and Videos with the same Scene manipulation. The results of both experiments showed successful classification of Search. Interestingly, Search was best classified in the absence of the original scene, particularly in the Fixation condition. Memory was also classified above chance, with the strongest classification occurring with Videos in the presence of the scene. Additional analyses of the pattern of correct responses in these two conditions demonstrated which eye movement properties successful classifiers were using. These findings demonstrate conditions under which humans can extract information from eye movement characteristics, in addition to providing insight into the relative success and failure of previous computer classifiers.
Visual working memory (VWM) has been implicated both in the online representation of object tokens (in the object-file framework) and in the top-down guidance of attention during visual search, implementing a feature template. It is well established that object representations in VWM are structured by location, with access to the content of VWM modulated by position consistency. In the present study, we examined whether this property generalizes to the guidance of attention. Specifically, in two experiments, we probed whether the guidance of spatial attention by features in VWM is modulated by the position of the object from which these features were encoded. Participants remembered an object with an incidental color. Items in a subsequent search array could match the color of the remembered object, its location, or both. Robust benefits of color match (when the matching item was the target) and costs (when the matching item was a distractor) were observed. Critically, the magnitude of neither effect was influenced by spatial correspondence. The results demonstrate that features in VWM influence attentional priority maps in a manner that does not necessarily inherit the spatial structure of the object representations in which those features are maintained.
Statistical regularities have recently been demonstrated to influence visual search across a wide variety of learning mechanisms and search features. To function in the guidance of real-world search, however, such learning must be contingent on the context in which the search occurs and on the object that is the target of search. The former has been studied extensively under the rubric of contextual cuing. Here, we examined, for the first time, categorical cuing: the role of object categories in structuring the acquisition of statistical regularities used to guide visual search. After an exposure session in which participants viewed six exemplars with the same general color in each of 40 different real-world categories, they completed a categorical search task, in which they searched for any member of a category based on a label cue. Targets that matched recent within-category regularities were found faster than targets that did not (Experiment 1). Such categorical cuing was also found to span multiple recent colors within a category (Experiment 2). It was observed to influence both the guidance of search to the target object (Experiment 3) and the basic operation of assigning single exemplars to categories (Experiment 4). Finally, the rapidly acquired category-specific regularities were also quickly modified, with the benefit decreasing during the search session as participants were exposed equally to the two possible colors in each category. The results demonstrate that object categories organize the acquisition of perceptual regularities and that this learning exerts strong control over the instantiation of the category representation as a template for visual search.