2007
DOI: 10.3758/bf03192955

Auditory-visual contextual cuing effect

Abstract: Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual searches (the contextual cuing effect). Whereas previous studies have shown a cuing effect in the visual domain, the present study examined whether a contextual cuing effect could develop from association between auditory events and visual target locations (Experiments 1 and 2). In the training phase, participants searched for a T among Ls, preceded by 2 sec of auditory stimulus. The target location could be…
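The abstract describes the training-phase procedure only in prose. As a rough illustration, the minimal Python sketch below shows how such an auditory-visual pairing could be generated: in the predictive condition each auditory stimulus is consistently mapped to one target location across training, while in the control condition that pairing is broken. The grid size, set size, number of sounds, and block count here are hypothetical stand-ins, not the parameters actually used in the paper.

```python
import random

rng = random.Random(0)

# Illustrative parameters only; the actual display geometry, set size, and
# number of auditory stimuli in Kawahara (2007) may differ.
GRID = [(col, row) for col in range(8) for row in range(6)]  # candidate locations
SET_SIZE = 12   # one T target among 11 L distractors
N_SOUNDS = 8    # distinct auditory stimuli, each played for 2 sec before search

# Predictive condition: each sound is paired with one fixed target location
# for the whole training phase, so the sound can cue where the T will appear.
sound_to_target = dict(zip(range(N_SOUNDS), rng.sample(GRID, N_SOUNDS)))

def make_trial(sound_id: int, predictive: bool) -> dict:
    """One trial: a 2-sec sound followed by a T-among-Ls search display."""
    if predictive:
        target = sound_to_target[sound_id]   # consistent sound-location pairing
    else:
        target = rng.choice(GRID)            # pairing broken (control)
    distractors = rng.sample([p for p in GRID if p != target], SET_SIZE - 1)
    return {"sound_id": sound_id, "target": target, "distractors": distractors}

def make_block(predictive: bool) -> list[dict]:
    trials = [make_trial(s, predictive) for s in range(N_SOUNDS)]
    rng.shuffle(trials)                      # randomize trial order within block
    return trials

training_phase = [make_block(predictive=True) for _ in range(20)]
print(training_phase[0][0])
```

Under this setup, a contextual cuing effect would appear as faster search times on sound-cued (predictive) trials than on control trials late in training.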

Cited by 16 publications (12 citation statements); references 31 publications. Citing publications range from 2008 to 2023.

Citation statements, ordered by relevance:
“…The finding that contextual retrieval works only with high-contrast (but not low-contrast) displays in the photopic environment is theoretically important, given that contextual cueing is thought to be a robust phenomenon, manifesting in various complex environments (Goujon et al., 2015). For instance, contextual cueing has been reported in search tasks with multiple redundant contexts (e.g., scene-based plus space-based contexts in Brooks et al., 2010), across different types of sensory modalities (e.g., the tactile modality in Assumpção, Shi, Zang, Müller, & Geyer, 2015, 2018; and the auditory modality in Kawahara, 2007), and even when only part of the context in the search arrays remained constant (Brady & Chun, 2007; Jiang & Chun, 2001). It should be noted that, in most of the previous studies, (visual) contextual-cueing effects were observed with high-contrast configurations.…”
Section: Discussion
Confidence: 99%
“…In conclusion, we believe that during a self-initiated fall, subjects block external inputs that are irrelevant to the control of the imminent contact between feet and floor, and prime top-down, goal-directed control over potentially disturbing sensory inputs such as the SAS, which might act as a distractor (Kawahara, 2007). In our paradigm, we consider that the learned task is already sent feedforward in a top-down manner.…”
Section: Discussion
Confidence: 99%
“…Contextual cueing is not restricted to situations in which the spatial layout of distractors is the repeated aspect of the context. Repetition of the shapes of the distractors, the semantic category of distractor words, auditory cues, temporal sequences, the motion trajectories of distractors, background color or texture, and background scenes all yield an RT advantage (Chun & Jiang, 1999; Endo & Takeda, 2004; Goujon, Brockmole, & Ehinger, 2012; Goujon, Didierjean, & Marmèche, 2009; Kawahara, 2007; Kunar, Flusberg, & Wolfe, 2006; Makovski, Vázquez, & Jiang, 2008; Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006). These different aspects of repeated contexts may interact.…”
Section: Contextual Cueing
Confidence: 99%