2011
DOI: 10.1037/a0023101
Attentional capture of objects referred to by spoken language.

Abstract: Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to…

Cited by 28 publications (48 citation statements)
References 48 publications (58 reference statements)
“…We hypothesised that valid cues would reduce detection times relative to control and invalid-cue conditions, consistent with findings from other studies in which auditory cues were used (e.g., Salverda & Altmann, 2011). Despite skilled team sport players' ability to allocate visual attention highly effectively (Enns & Richards, 1997; Nougier et al., 1989), we predicted that skilled netballers would be prone to auditory attentional capture effects, consistent with the notion that skilled team sport athletes do not differ from non-athletes on fundamental measures of perception and attention (Abernethy, Neal, & Koning, 1994; Hughes, Blundell, & Walters, 1993; Ward, Williams, & Loran, 2000; Williams & Grant, 1999).…”
Section: Introduction (supporting)
confidence: 83%
“…The authors showed that this effect could also be replicated with primes that comprised only a verbal description of a shape; hence, they concluded that the working memory (WM) of verbal information was able to bias participants' visual attention. Salverda and Altmann (2011) examined this notion further by investigating the effects of spoken cues on target detection performance. In two experiments, participants had to generate a saccade to the target, after hearing a word that referred to the target object or to a distractor.…”
Section: Introduction (mentioning)
confidence: 99%
“…Altmann and Kamide (2007), for example, proposed that, prior to auditory instructions, participants' inspection of the visual array leads to pre-activation of conceptual features of the displayed objects, leaving conceptually enriched episodic traces associated with each object. As the verbal instructions unfold, the conceptual features activated by the verbal input make contact with the features pre-activated from the visual array and effectively re-activate these episodic traces, which then leads to a shift in visual attention such that participants are more prone to make a saccadic eye movement towards the object with these features (see also Salverda & Altmann, 2011). Along these lines, the greater fixation proportions we observed to both structure-based and function-based distracters relative to the unrelated items could be thought of as reflections of overlapping action features that are incidentally activated by the target images, distracter images and the spoken words.…”
Section: Methods (mentioning)
confidence: 99%
“…When the goal is less articulated, for instance in “look-and-listen” studies in which participants are not given an explicit task, we can assume that these routines might also be activated and thus compete for control of saccades, much as high-saliency areas might attract saccades in viewing a scene without an explicit task. Thus, the goal-based linking hypothesis can accommodate evidence for basic effects of visual-linguistic integration that we might consider to be “automatic” (see Salverda & Altmann, in revision, for evidence that task-irrelevant named objects can capture attention). However, according to the goal-based view, these automatic effects arise from routines that constitute the bottom level of a hierarchically organized goal structure.…”
Section: Linking Hypotheses Between Language Processing and The Visua… (mentioning)
confidence: 99%
“…Perhaps the simplest effects occur when a listener fixates an object upon hearing its name (see Salverda & Altmann, in revision, for a study suggesting that such effects are automatic, i.e., at least partially independent of task). In addition, linguistic material that is relevant for reference resolution but that is not inherently referential, such as verbs, prepositions, and scalar adjectives, has immediate effects on fixations to potential upcoming referents (e.g., Altmann & Kamide, 1999; Chambers, Tanenhaus, Eberhard, Filip, & Carlson, 2002; Eberhard, Spivey-Knowlton, Sedivy, & Tanenhaus, 1995).…”
Section: Linking Hypotheses Between Language Processing and The Visua… (mentioning)
confidence: 99%