The human visual system receives more information than can be consciously processed. To overcome this capacity limit, we employ attentional mechanisms to prioritize task-relevant (target) information over less relevant (distractor) information. Regularities in the environment can facilitate the allocation of attention, as demonstrated by the spatial contextual cueing paradigm. When observers are repeatedly exposed to scenes containing invariant distractor information, learning from earlier exposures enhances search for the target. Here, we investigated whether spatial contextual cueing draws on spatial working memory resources and, if so, at what level of processing working memory load has its effect. Participants performed two tasks concurrently: a visual search task, in which the spatial configuration of some search arrays occasionally repeated, and a spatial working memory task. Increases in working memory load significantly impaired contextual learning. These findings indicate that spatial contextual cueing utilizes working memory resources.
The visual world is typically too complex to permit full apprehension of its content from a single fixation. Humans therefore use visual search to direct attention and eye movements to locations or objects of interest in cluttered scenes. Psychophysical investigations have revealed that observers can select target elements from within an array of distractors on the basis of their spatial location or simple features, such as color. It remains unclear, however, how stimuli that lie outside the current search array are represented in the visual system. To investigate this, we recorded continuous neural activity using EEG while participants searched a foveal array of colored targets and distractors, and ignored irrelevant objects in the periphery. Search targets were defined either by a unique feature within the array or by a conjunction of features. Objects outside the array could match the target or distractor color within the array, or otherwise possessed a baseline (neutral) color present only in the periphery. The search array and irrelevant peripheral objects flickered at unique rates and thus evoked distinct frequency-tagged neural oscillations. During conjunction but not unique-feature search, target-colored objects outside the array evoked enhanced activity relative to distractor-colored and neutral objects. The results suggest that feature-based selection applies to stimuli at ignored peripheral locations, but only when central targets compete with distractors within the array. Distractor-colored and neutral objects evoked equivalent oscillatory responses, suggesting that feature-based selection at ignored locations during visual search arises exclusively from enhancement rather than suppression of neural activity.
An observer's current goals can influence the processing of visual stimuli. Such influences can work to enhance goal-relevant stimuli and suppress goal-irrelevant stimuli. Here, we combined behavioral testing and electroencephalography (EEG) to examine whether such enhancement and suppression effects arise even when the stimuli are masked from awareness. We used a feature-based spatial cueing paradigm, in which participants searched four-item arrays for a target in a specific color. Immediately before the target array, a nonpredictive cue display was presented in which a cue matched or mismatched the searched-for target color, and appeared either at the target location (spatially valid) or another location (spatially invalid). Cue displays were masked using continuous flash suppression. The EEG data revealed that target-colored cues produced robust N2pc and NT responses, both signatures of spatial orienting, and distractor-colored cues produced a robust PD, a signature of suppression. Critically, the cueing effects occurred for both conscious and unconscious cues. The N2pc and NT were larger in the aware versus unaware cue condition, but the PD was roughly equivalent in magnitude across the two conditions. Our findings suggest that top-down control settings for task-relevant features elicit selective enhancement and suppression even in the absence of conscious perception. We conclude that conscious perception modulates selective enhancement of visual features, but suppression of those features is largely independent of awareness.
Singleton detection mode is a state in which spatial attention is set to prioritize any object that differs from all other objects present on any feature dimension. Relatively little research has been devoted to confirming the consequences of such a search mode for stimulus processing. It is often implied that when observers employ singleton detection mode, all singletons capture attention equally, and that when observers search for a single feature, only that feature captures attention. The experiment presented here contradicts these implications. We had observers search for colored singleton targets preceded by spatially uninformative colored singleton cues, and we recorded stimulus-evoked neural responses using electroencephalography (EEG). When observers had to respond to targets defined by two possible colors (a task intended to encourage singleton detection mode), cue validity effects were apparent for both target-color cues and irrelevant-color cues, and these effects were accompanied by an N2pc in the EEG data. Importantly, however, the target-color cues evoked significantly larger cue validity effects and N2pc components than did the irrelevant-color cues. In contrast, when observers had to respond to targets defined by one color (a task intended to encourage feature search mode), only cues of that color evoked a cue validity effect. Interestingly, the N2pcs produced by irrelevant cues did not differ between feature and singleton search, suggesting that the behavioral difference was not due to different attentional orienting. Rather, we suggest that behavioral singleton capture is due to a diminished same-location cost produced by irrelevant-color cues.
The relationship between visual attention and conscious perception has been the subject of debate across a number of fields, including philosophy, psychology, and neuroscience. Whereas some researchers view attention and awareness as inextricably linked, others propose that the two are supported by distinct neural mechanisms that can be fully dissociated. In a pioneering study, van Boxtel, Tsuchiya, and Koch (2010b) reported evidence for a dissociation between attention and conscious perception using a perceptual adaptation task in which participants' perceptual awareness and visual attention were manipulated independently. They found that participants' awareness of an adapting stimulus increased afterimage duration, whereas attending to the adaptor decreased it. Given the important theoretical implications of these findings, we endeavored to replicate them using an identical paradigm while addressing some potential shortcomings of the original study by adding more trials and a larger participant sample. Consistent with van Boxtel, Tsuchiya, and Koch, we found that afterimage duration was reliably increased when participants were aware of the adapting stimulus. In contrast to the original findings, however, attention to the adaptor also increased afterimage duration, suggesting that attention and awareness had the same, rather than opposing, effects on afterimage duration. We discuss possible reasons for this discrepancy.