2017
DOI: 10.1016/j.apergo.2017.02.023
Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop

Cited by 191 publications (102 citation statements). References 65 publications.
“…The surrogate reference task resembles a target recognition task, in which participants are required to identify a target item (the letter Q in this study) amid a field of distractors (the letter O) and manually select it on a touchscreen located to the right of the participant. This secondary task is commonly used in studies of this nature (Beller, Heesen, and Vollrath 2013; Hergeth et al. 2016; Hsieh, Seaman, and Young 2015; Petermeijer et al. 2017; Stockert, Richardson, and Lienkamp 2015). The task imposes a controllable level of cognitive load and resembles an ordinary activity like interacting with an infotainment system or smartphone.…”
Section: Figure (mentioning)
confidence: 99%
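
To make the cited surrogate reference task more concrete, here is a minimal Python sketch (purely illustrative, not code from any of the cited studies; the grid size and symbols are assumptions) that generates one such display: a field of 'O' distractors with a single 'Q' target that the participant would have to locate and select.

```python
import random

def generate_surt_stimulus(rows: int = 5, cols: int = 8) -> tuple[list[list[str]], tuple[int, int]]:
    """Build one surrogate-reference-task display: a grid of 'O' distractors
    with a single 'Q' target placed in a random cell. Returns the grid and the
    target's (row, col) position. Grid dimensions are illustrative assumptions."""
    grid = [["O" for _ in range(cols)] for _ in range(rows)]
    target = (random.randrange(rows), random.randrange(cols))
    grid[target[0]][target[1]] = "Q"
    return grid, target

if __name__ == "__main__":
    grid, target = generate_surt_stimulus()
    for row in grid:
        print(" ".join(row))
    print(f"Target at row {target[0]}, column {target[1]}")
```

In the studies cited above, the display is rendered on a touchscreen and the response is a tap on the target location; this sketch covers only stimulus generation.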
“…The independent variable for the between-subjects design was the non-driving task (Video, Call, Reading). Dependent variables, as used in earlier studies [8], [10], [18], were:…”
Section: Variables (mentioning)
confidence: 99%
“…For example, Gold et al. [8] used a beep as a take-over request, whereas Melcher et al. [9] evaluated a bimodal (i.e., auditory-visual) take-over request. More recently, tactile stimuli have been shown to be effective as take-over requests [10]. Non-driving tasks, such as reading and making a phone call, engage mainly the visual and auditory perceptual modalities.…”
Section: Introduction (mentioning)
confidence: 99%
“…Another goal of multimodal warnings is to draw the driver's attention to a visual display on which relevant information is presented if the driver's gaze is not oriented in that direction [50]. Multimodal take-over requests have consequently been shown to be superior to unimodal ones [19,21]. However, precisely these advantages of multimodal HMIs may interfere with ongoing NDRTs during automated manoeuvres in which the driver is not supposed to intervene.…”
Section: Study (mentioning)
confidence: 99%
“…[15]). Successful human-automation cooperation requires fast and effective communication of the need for manual intervention in these cases (e.g., [16][17][18][19][20][21][22][23]). Therefore, getting the driver back into the loop as fast as possible has been the focus of a large body of research (see [24] for an overview), as this function of the in-vehicle HMI can be viewed as a key element for the safety of automated vehicles.…”
Section: 2 (mentioning)
confidence: 99%