2015
DOI: 10.1080/00140139.2015.1099742

Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search

Abstract: In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when…

Cited by 45 publications (73 citation statements, 2016–2022)
References 62 publications
“…More generally, the present findings suggest that pupil sizes could provide a general metric to assess attentional load in attention demanding visuospatial tasks (e.g., in visual search [48, 49] or visuomotor tasks [50]) without necessarily taking the task performance into account. Findings could also be applicable to demanding real world tasks (e.g., air-traffic control, driving a car, or flying an airplane) to assess the current attentional load during task performance.…”
Section: Discussion
Mentioning confidence: 80%
“…For instance, when synchronizing actions, co-actors divide attention between locations relevant for their own and for their co-actor’s goal (Kourtis et al, 2014; see Böckler et al, 2012; Ciardo et al, 2016 for similar results using different tasks), and sharing gaze affects object processing by making attended objects motorically and emotionally more relevant (Becchio et al, 2008; Innocenti et al, 2012; Scorolli et al, 2014). Moreover, in a joint search task, co-actors who mutually received information about each other’s gaze location via different sensory modalities (i.e., vision, audition, and touch) searched faster than without such information (Brennan et al, 2008; Wahn et al, 2015). Together, these findings demonstrate the important role of gaze information for joint action.…”
Section: Sharing Sensorimotor Information
Mentioning confidence: 99%
“…Arrighi et al found that the MOT task selectively interfered with the visual discrimination task while the auditory discrimination performance was not affected, suggesting distinct attentional resources for the visual and auditory modalities. Relatedly, in other recent studies (Wahn & König, 2016; Wahn et al, 2015), participants performed a visual search task (i.e., a task in which participants needed to discriminate targets from distractors) and either a tactile or visual localization task at the same time. The localization task interfered with the visual search task, regardless of whether the localization task was performed in the tactile or visual sensory modality.…”
Section: Combining Object-based and Spatial Attention Tasks
Mentioning confidence: 99%