2019
DOI: 10.1016/j.trf.2019.06.011
The comparison of auditory, tactile, and multimodal warnings for the effective communication of unexpected events during an automated driving scenario

Abstract: Please refer to the published version for the most recent bibliographic citation information. If a published version is known, the repository item page linked above will contain details on accessing it.

Cited by 39 publications (17 citation statements)
References 31 publications
“…The communication of potential or actual hazards has been tested using diverse strategies, for example using visual (Wiegand et al, 2019), audible (Wong et al, 2019), haptic (Ma et al, 2019), olfactory (Wintersberger et al, 2019) and multimodal (Geitner et al, 2019) interfaces. Recent technological developments permit the construction of interfaces featuring augmented reality (AR).…”
Section: Literature Review
confidence: 99%
“…Moreover, interactions with visual or audio-visual displays are more efficient than those with auditory displays only [36]. In this sense, research on multimodal perception is particularly relevant when studying human factors of driver aid systems [37,38].…”
Section: Human Factors and Their Limits
confidence: 99%
“…The theoretical basis of user cognition is mainly the theory of limited resources and graphic perception, which expresses the explicit resources and implicit cognition of VR system information representation. Due to the limited capacity of the user's cognitive resources, it is necessary to reduce a user's cognitive load through multiple channels during information identification [6,8,9], thus improving the cognitive efficiency of a user's experience and task operation scenarios. Therefore, this paper selects visual channels, auditory channels and tactile channels to study a user's cognitive behaviors and design resource characteristics.…”
Section: Channel Theory of Cognitive Resources
confidence: 99%
“…Lei Xiao et al [8] summarized the use of tactile cues interacting with other sensory stimuli to predict potential perceptual experiences in multi-sensory environments. Claudia Geitner et al [9] extended the research on multimodal warning performance. The above research shows that a user's information perception capacity across multiple channels is greater than in a single channel, so this paper divides the information input in the VR system into three channels: visual, auditory and tactile.…”
Section: Introduction
confidence: 99%