Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2018
DOI: 10.1145/3239092.3265948

Adaptive Trust Calibration for Supervised Autonomous Vehicles

Abstract: Poor trust calibration in autonomous vehicles often degrades overall system performance in terms of safety or efficiency. Existing studies have primarily examined the importance of system transparency in maintaining proper trust calibration, with little emphasis on how to detect over-trust and under-trust or how to recover from them. With the goal of addressing these research gaps, we first provide a framework to detect a calibration status on the basis of the user's reliance behavior. We then prop…
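The abstract describes a framework for detecting a trust calibration status from the user's reliance behavior. Purely as an illustration of that idea, and not the authors' actual method, a minimal Python sketch could compare an observed reliance decision against the decision implied by the true success probabilities of automatic and manual operation; every name here (CalibrationStatus, detect_calibration_status, p_auto, p_man, relied_on_automation) is a hypothetical placeholder:

from enum import Enum

class CalibrationStatus(Enum):
    CALIBRATED = "calibrated"
    OVER_TRUST = "over-trust"    # relies on automation although manual operation succeeds more often
    UNDER_TRUST = "under-trust"  # intervenes manually although automation succeeds more often

def detect_calibration_status(p_auto, p_man, relied_on_automation):
    """Compare the observed reliance decision with the decision implied by
    the success probabilities of automatic (p_auto) and manual (p_man)
    operation. The hard-threshold comparison is an illustrative assumption."""
    automation_is_better = p_auto > p_man
    if relied_on_automation and not automation_is_better:
        return CalibrationStatus.OVER_TRUST
    if not relied_on_automation and automation_is_better:
        return CalibrationStatus.UNDER_TRUST
    return CalibrationStatus.CALIBRATED

# Example: the user keeps relying on automation even though manual
# operation would succeed more often (0.9 > 0.6) -> over-trust.
print(detect_calibration_status(p_auto=0.6, p_man=0.9, relied_on_automation=True))

Consistent with the abstract, an adaptive system would then prompt the user when such a miscalibrated status is detected, rather than relying on system transparency alone.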

Cited by 15 publications (6 citation statements)
References 20 publications

“…It has been suggested that this overtrust could lead to underestimation regarding likelihood of an accident in partially automated cars, or even fostering reliance on systems that have shown themselves to be faulty (Wagner, Borenstein, & Howard, 2018). Although there has been recent work attempting to develop an “adaptive trust calibration” system, which would help with issues of overtrust (Okamura & Yamada, 2018), such a system does not yet exist in partially automated vehicles.…”
Section: Introduction
confidence: 99%
“…P̂_man is a user's self-estimation of P_man, which corresponds to the user's self-confidence. These two parameters were not clearly distinguished in [8]. Several studies [13,23,26] have demonstrated that reliance behavior can be explained by the relationship between a user's trust in the agents and the user's self-confidence.…”
Section: Our Approach: Adaptive Trust Calibration
confidence: 93%
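The reliance model in the statement above, in which reliance follows from the relationship between trust and self-confidence, can be written compactly. A minimal sketch, under the simplifying assumption that reliance is a hard threshold on the two quantities; the function name and example values are illustrative:

def predicted_reliance(trust_in_automation, self_confidence):
    """A user is expected to rely on the automation when trust in it
    exceeds self-confidence (the user's estimate P̂_man of their own
    success probability P_man). The hard threshold is a simplifying
    assumption, not the cited papers' exact model."""
    return trust_in_automation > self_confidence

# Example: trust 0.7 against self-confidence 0.5 -> reliance is predicted.
assert predicted_reliance(0.7, 0.5)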
“…To our knowledge, most of the existing studies on trust calibration have been about how to prevent inappropriate calibration status; very few have investigated how to mitigate it. The initial idea of the proposed method was discussed in the work [8]. In the current study, we defined a framework and conducted an online experiment with a web-based drone simulator to evaluate the effectiveness of our method in an over-trust scenario.…”
Section: Introduction
confidence: 99%
“…Over-trust is an instance of miscalibrated trust and refers to instances where the user perceives the system to be more trustworthy than is warranted by actual system performance (Figure 1). Over-trust in a system means user complacency, resulting in not noticing automation failures, especially when they are “silent” failures (de Visser et al., 2020; Kraus et al., 2019; Louw et al., 2019; Okamura & Yamada, 2018, 2020; Parasuraman & Manzey, 2010; Ruscio et al., 2015), and can predict the misuse of a system (Bahner et al., 2008; Parasuraman & Manzey, 2010; Robinette et al., 2016; Wagner et al., 2018a). This complacency can lead to not only failures of monitoring but also decision biases (Bahner et al., 2008; V. A. Banks et al., 2018; Merritt et al., 2015a; Parasuraman & Manzey, 2010; Ruscio et al., 2015).…”
Section: The Case For Trust Assessment In Human-machine Interaction
confidence: 99%