2016
DOI: 10.1167/16.12.337
How you use it matters: Object Function Guides Attention during Visual Search in Scenes

Cited by 18 publications (29 citation statements) | References 0 publications
“…‘affordances’ [4,71]. Action affordance has been shown to be an important factor in how we understand objects [72], and object affordance influences how we search for items in visual scenes [73]. While several studies have considered how visual scenes serve as a context facilitating recognition of both objects [18,74–76] and actions [77], the idea that affordances determine how we understand the scene itself is relatively unexplored.…”
Section: Goal 4: What Can I Do Here?
confidence: 99%
“…Moreover, the scene preview benefit exists even if the target object was not visible during the preview (i.e., digitally removed), but only found through windowed search, thereby confirming the benefit of scene-context processing, irrespective of any additional local target processing that could occur when targets are present in previews (Castelhano & Henderson, 2007; Võ & Henderson, 2010). The FPMW has also been used to demonstrate how semantically consistent and inconsistent objects are processed within scenes (Castelhano & Heaven, 2011; Võ & Henderson, 2011), and how learned object function may guide attention aside from object features (Castelhano & Witherspoon, 2016). The ability to process the scene preview has been linked to individual differences in visual perceptual processing speed (Võ & Schneider, 2010), and the time-course of the initial representation derived from the scene preview has also been investigated.…”
Section: Flash-Preview Moving Window
confidence: 76%
“…This sheds new light on recent research that manipulated the contents of target and scene knowledge itself, rather than the order in which target knowledge is activated (Litchfield & Donovan, 2016). If search is guided by target templates (Malcolm & Henderson, 2009) and scene-gist information (Bahle, Matsukura, & Hollingworth, in press), then the effectiveness of search should depend on the expertise of the individual and their ability to take advantage of this knowledge (Castelhano & Witherspoon, 2016). The findings from the present study show that the weak scene preview effects previously observed by Litchfield and Donovan (2016) are unlikely to be due to methodological issues relating to repeated search or knowing beforehand the identity of the upcoming image and target.…”
Section: Discussion
confidence: 91%
“…Following a 250 ms glimpse of the upcoming scene, participants were quicker to initiate search and quicker to fixate the target. In addition, knowing target identity before the scene preview led to further improvements in how search was executed, but such improvements did not carry over to RTs, suggesting this metric may not be as sensitive as search latency, as it also includes variability in verification time (Castelhano & Heaven, 2010; Castelhano & Witherspoon, 2016).…”
Section: Discussion
confidence: 99%