2014
DOI: 10.1016/j.actpsy.2014.10.002
Flexible cue combination in the guidance of attention in visual search

Abstract: Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly- and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly- or nonlinearly-separable feature is available but not required for target identification. We asked ob…

Cited by 5 publications (11 citation statements)
References 30 publications
“…These results are inconsistent with current feature-specific accounts of conjunction search, and show that information about the relative features of all items is extracted rapidly and automatically (i.e., when the stimuli are presented only briefly, and when they are irrelevant to the task), in line with a relational account of conjunction search. The present findings and the results of Brand et al. (2014) show that relative features can still be extracted even when none of the items in the display has a unique feature, and when they differ on irrelevant stimulus dimensions (e.g., the relative size of items can be used for guidance even when the stimuli also vary in colour and orientation; Brand et al., 2014). These findings provide very strong evidence for the existence of a mechanism that rapidly and automatically extracts information about feature relationships.…”
Section: Discussion (supporting)
confidence: 62%
“…Critically, Brand et al. (2014) pointed to a third possible limitation of relational search, claiming that conjunction search may be feature-specific, not relational. In a conjunction search task, the target differs from the non-targets only in a combination of two or more features, such as its particular colour and orientation (e.g., a red vertical target among horizontal red and vertical green non-targets; Treisman & Gelade, 1980; Wolfe, 1994).…”
Section: Significance Statement (mentioning)
confidence: 99%
“…To rule out the possibility that saliency-related effects were explicable by other confounding factors, we also computed two other indexes for each scene, namely the "target size" and "horizontal target eccentricity" indexes. Previous studies have shown that these indexes affect visual search performance, with bigger targets and less eccentric targets (i.e., those closer to the display center) being easier to detect (see, e.g., Brand, Oriet, Johnson, & Wolfe, 2014; Gruber, Muri, Mosimann, Bieri, Aeschimann, Zito, Urwyler, Nyffeler, & Nef, 2014). These indexes were therefore included in the regression model described below.…”
Section: Computation Of Low-Level Sensory Saliency Indexes (mentioning)
confidence: 99%
“…In practice, feature-specific tuning to the exact target feature could only be observed when a relational search strategy had been prevented; for instance, when the features of the nontargets were varied such that the target was not reliably the largest or smallest item (or the reddest or yellowest item) anymore (e.g., Becker, Harris, Venini, & Retell, 2014; Harris, Remington, & Becker, 2013). However, compared with relational search, feature-specific tuning can result in less efficient search (e.g., Becker, Harris, Venini, & Retell, 2014b), and these differences in search efficiency between relational versus feature-specific search strategies may also (at least in part) explain the linear separability effect (D'Zmura, 1991), namely that search is more efficient when the target has a relatively extreme feature value in feature space (linearly separable target; e.g., largest/smallest; steepest/flattest; darkest/brightest) than when it has an intermediate feature value (nonlinearly separable target; e.g., medium target among small and large nontargets; tilted target among vertical and horizontal nontargets; Bauer, Jolicoeur, & Cowan, 1996; Becker, 2010; Brand et al., 2014; Hodsoll & Humphreys, 2001).…”
Section: (mentioning)
confidence: 99%