2015
DOI: 10.3758/s13414-015-0957-7
Statistical learning modulates the direction of the first head movement in a large-scale search task

Abstract: Foraging and search tasks in everyday activities are often performed in large, open spaces, necessitating head and body movements. Such activities are rarely studied in the laboratory, leaving important questions unanswered regarding the role of attention in large-scale tasks. Here we examined the guidance of visual attention by statistical learning in a large-scale, outdoor environment. We used the orientation of the first head movement as a proxy for spatial attention and examined its correspondence with rea…

Cited by 9 publications (9 citation statements)
References 31 publications
“…First, one experiment did find some evidence of using an allocentric probabilistic cue 55 , specifically when a portion of the space was also cued with a different colour. Second, another found positive results in a large-scale environment when participants started from the same place but in random directions on each trial 58 . It is arguable whether this represents the use of an allocentric frame, since it still allows the same view of the space after turning.…”
Section: Closely Related Studies and Results
confidence: 98%
“…A significant RT advantage toward the high-probability quadrant emerged in the first biased block, but this effect did not reach significance until the third unbiased block. Previous studies using the two-phase design, wherein a long training phase is followed by a testing phase, have reported significant probability cuing in the first training (biased) block (e.g., Won, Lee, & Jiang, 2015). However, this RT advantage most likely reflects location repetition priming (Walthew & Gilchrist, 2006).…”
Section: Discussion
confidence: 99%
“…However, it remains unclear whether location probability learning is gradually acquired like other habits (Graybiel, 2008; Seger & Spiering, 2011). Whereas most habits take many repetitions to form, the search advantage in high-probability locations emerges rapidly, sometimes becoming significant after a dozen trials (Won, Lee, & Jiang, 2015). Given the early onset, some researchers question whether participants acquire any statistical learning.…”
Section: Introduction
confidence: 99%
“…Instead, it was suggested that the fixed benefit in RT might reflect a facilitation that takes place only after the target is detected (Kunar et al, 2007). However, it is important to note that others have argued that learning in visual search does in fact influence attentional guidance (Chun & Jiang, 1998; Jiang, Won, & Swallow, 2014; Peterson & Kramer, 2001; Won, Lee, & Jiang, 2015), and the finding that CC does not interact with set-size may be the result of the increased difficulty to recognize the context in larger set-sizes (Makovski & Jiang, 2010). That is, given the premise that only a few close items are actually learned in CC (Brady & Chun, 2007; Olson & Chun, 2002), it is likely that it is more difficult to recognize the relevant pattern in more crowded displays, thereby cancelling out the attentional guidance benefit (Jiang, Makovski, & Shim, 2009).…”
Section: Discussion
confidence: 99%