2021
DOI: 10.1016/j.trf.2021.03.012

Exploring the benefits of conversing with a digital voice assistant during automated driving: A parametric duration model of takeover time
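The paper's central method, named in the title, is a parametric duration (survival) model of takeover time. As an illustrative sketch only, and not the authors' actual specification, the snippet below fits a Weibull accelerated-failure-time model to a small hypothetical takeover-time dataset with the `lifelines` library; the column names and covariates (`takeover_time`, `observed`, `voice_assistant`, `traffic_density`) are assumptions made for demonstration.

```python
# Illustrative sketch of a parametric duration model of takeover time.
# NOT the authors' model: data, covariates, and column names are hypothetical.
import pandas as pd
from lifelines import WeibullAFTFitter

# Hypothetical dataset: each row is one takeover event.
# takeover_time   - seconds from takeover request (TOR) to manual control
# observed        - 1 if the driver took over, 0 if censored (no takeover)
# voice_assistant - 1 if the driver was conversing with the voice assistant
# traffic_density - vehicles per km in the surrounding traffic
df = pd.DataFrame({
    "takeover_time":   [2.8, 3.5, 4.1, 2.2, 5.0, 3.9, 2.6, 4.4, 3.1, 4.8],
    "observed":        [1,   1,   1,   1,   0,   1,   1,   1,   1,   1],
    "voice_assistant": [1,   0,   0,   1,   0,   1,   1,   0,   1,   0],
    "traffic_density": [10,  25,  40,  15,  55,  30,  20,  45,  12,  50],
})

# Weibull accelerated-failure-time (AFT) model: covariates scale the
# expected takeover time multiplicatively.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="takeover_time", event_col="observed")
aft.print_summary()

# Predicted median takeover time for a new driver/condition.
new = pd.DataFrame({"voice_assistant": [1], "traffic_density": [30]})
print(aft.predict_median(new))
```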

Cited by 32 publications (11 citation statements)
References 72 publications (111 reference statements)
“…A 6-second sequence of glances was used by Fridman et al [108] to predict the driver state. Likewise, Mahajan et al [240] analyzed glance behavior (e.g., frequency) regarding situation awareness before takeover requests (TORs). Besides, longer eye movement sequences can be used, which we found to be mainly implicit input for predictive algorithms.…”
Section: Input Modalities (citation type: mentioning)
Confidence: 99%
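The excerpt above describes analyzing glance behavior (e.g., glance frequency) in the interval before a takeover request. A minimal sketch of how such features might be derived from per-sample gaze labels is shown below; the area-of-interest labels, window length, and sampling rate are illustrative assumptions, not the procedure of the cited studies.

```python
# Illustrative sketch: summarizing glance behavior in a fixed window before a
# takeover request (TOR). Labels, window length, and sampling rate are assumed.
from collections import Counter

def glance_features(gaze_labels, sample_rate_hz=10, window_s=6.0):
    """Compute simple glance statistics over the last `window_s` seconds.

    gaze_labels: per-sample area-of-interest labels in chronological order,
                 e.g. "road", "display", "mirror".
    Returns glance frequency (transitions per second) and the share of
    samples spent looking at the road.
    """
    n = int(sample_rate_hz * window_s)
    window = gaze_labels[-n:]                      # samples preceding the TOR
    transitions = sum(a != b for a, b in zip(window, window[1:]))
    counts = Counter(window)
    return {
        "glance_frequency_hz": transitions / window_s,
        "road_share": counts.get("road", 0) / len(window),
    }

# Example: 6 s of gaze samples at 10 Hz preceding a TOR.
samples = ["road"] * 30 + ["display"] * 15 + ["road"] * 10 + ["mirror"] * 5
print(glance_features(samples))
```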
“…For example, Benedetto et al [32] measured pupil diameter, blink rate, and blink duration to assess the driver's mental workload and inform accident prevention systems. Likewise, Mahajan et al [240] measured pupil diameter and blink frequency to assess the changed driver alertness due to automation and the TOR performance. However, we did not find any driver assistance systems already built into vehicles, such as drowsiness detection using visual input, which could be due to the set time range (2011-2021).…”
Section: Input Modalities (citation type: mentioning)
Confidence: 99%
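The statement above refers to measuring pupil diameter and blink frequency to gauge driver workload and alertness. Below is a minimal, hypothetical sketch of deriving those two measures from raw eye-tracker samples; the data layout, the zero-signal blink criterion, and the sampling rate are assumptions rather than the procedure used in the cited work.

```python
# Illustrative sketch: blink rate and mean pupil diameter from eye-tracker data.
# The data layout and the zero-signal blink criterion are assumptions.
import numpy as np

def pupil_metrics(pupil_mm, sample_rate_hz=60):
    """pupil_mm: pupil diameter per sample in mm; 0.0 marks lost signal (blink)."""
    pupil = np.asarray(pupil_mm, dtype=float)
    valid = pupil > 0
    invalid = ~valid

    # A blink starts where an invalid sample follows a valid one (or at index 0).
    prev_invalid = np.concatenate(([False], invalid[:-1]))
    blink_starts = np.flatnonzero(invalid & ~prev_invalid)

    duration_s = len(pupil) / sample_rate_hz
    return {
        "blink_rate_per_min": 60.0 * len(blink_starts) / duration_s,
        "mean_pupil_mm": float(pupil[valid].mean()) if valid.any() else float("nan"),
    }

# Example: 5 s of samples at 60 Hz containing two brief blinks.
signal = [3.1] * 100 + [0.0] * 6 + [3.3] * 120 + [0.0] * 8 + [3.0] * 66
print(pupil_metrics(signal))
```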
“…As errors in these situations may directly impact the resulting decisions, this could lead to subsequent errors of high impact. Already the interpretation and analysis, as well as the direct control of human-machine interactions, in economic, healthcare, or transportation relevant environments is dependent on smart processes based on machine-learning and cognitive architectures (Dojchinovski et al., 2019; Gruzauskas et al., 2020; Mahajan et al., 2021; Valaskova et al., 2021; Sri Suvetha et al., 2022). As these methods are employed to reduce the possibilities and impact of economic crisis, an erroneous implementation could be devastating.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
“…Outputting facial expressions and speech is a fundamental capability of in-vehicle robots. Several studies have shown that robot facial expressions and speech have additional positive effects on driving safety [8,9]. Many studies have shown that multimodal warnings in cars are more beneficial for driving safety than unimodal warnings [10,11], both for manual [11,12] and highly automated [13] vehicles.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%