2021
DOI: 10.3758/s13423-020-01859-9
Guided Search 6.0: An updated model of visual search

Abstract: This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior hi…

Cited by 370 publications (440 citation statements)
References 277 publications
“…How saliencies of multiple relevant objects interact has, to the best of our knowledge, not yet been systematically examined, and an observation of an effect of relative saliency is therefore new not only for the VWM community but also for the visual cognition community in general. Many theories of visual search (e.g., Duncan & Humphreys, 1989; Fecteau & Munoz, 2006; Liesefeld & Müller, 2019b, 2021; Wolfe, 2021) assume a preattentive spatial representation of the visual scene coding for relevance at each location and informing a second, attentive-processing stage. This assumption is needed to explain how second-stage focal attention can be allocated to the most promising objects in view without analyzing each object in detail first.…”
Section: Discussion
confidence: 99%
“…Visual search has played a central role in attentional theory for decades (Schneider & Shiffrin, 1977; Treisman & Gelade, 1980). Spatial configuration search, where targets are distinguished from distractors only by the internal arrangement of components, is widely held to index serial shifts of covert attention (Bricolo et al., 2002; Wolfe, 2021; Woodman & Luck, 1999). Here we employed the widely used spatial configuration search for T-shaped targets among L-shaped distractors.…”
Section: Introduction
confidence: 99%
“…It also quantifies the speed of processing (baseline search speed), a known behavioral marker of aging. Visual search tasks are easily extended to obtain indices for multiple other domains, including, for example, indices of perceptual interference (by varying target-distractor similarity) or of reward-based attentional capture (by varying the expected value of distractors) (Wolfe and Horowitz, 2017; Wolfe, 2021).…”
Section: Enrichment and Assessment of Multiple Cognitive Domains
confidence: 99%