2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC)
DOI: 10.1109/dasc.2016.7777998
Effects of transparency on pilot trust and agreement in the autonomous constrained flight planner

Abstract: We performed a human-in-the-loop study to explore the role of transparency in engendering trust and reliance within highly automated systems. Specifically, we examined how transparency impacts trust in and reliance upon the Autonomous Constrained Flight Planner (ACFP), a critical automated system being developed as part of NASA's Reduced Crew Operations (RCO) Concept. The ACFP is designed to provide an enhanced ground operator, termed a super dispatcher, with recommended diversions for aircraft when their prim…

Cited by 28 publications (14 citation statements). References 20 publications (28 reference statements).
“…Lyons (2013) identified displays/interfaces to be one of the two means to promote system transparency (i.e., shared awareness and shared intent between the user and automation). There is now emerging research on how to build in transparency (via interfaces) for highly autonomous safety systems (such as Auto-GCAS) in a way that supports operator appropriate trust development (Chen & Barnes, 2014; Lyons, Sadler, et al., 2016; Sadler et al., 2016).…”

Section: Discussion (mentioning)
Confidence: 99%
“…Other research has defined agent transparency in terms of understanding the agent's goals, actions, and reasoning, and projecting future states of the agent (Chen et al., 2018). Research (mostly in the domain of human-automation/agent interaction) has found that transparency in the forms of state awareness and projection (Chen et al., 2018; Ho et al., 2017; Mercado et al., 2016), decision rationale (Sadler et al., 2016), and perceived benevolent design (Ho et al., 2017) are associated with higher trust and/or better human-agent interactions.…”

Section: Transparency and Trust in Human-Robot Interaction (mentioning)
Confidence: 99%
“…Transparency at the situation assessment stage provides teleological (why) explanations, which may be incomplete, simply conveying relevant features contributing to an automation decision [29] or supplying the logic behind the decision [30] as well. Teleological explanations are both preferred by humans [31] and conform to our premise that end-to-end transparency will lead to fuller prediction and more accurate decisions.…”

Section: Transparency (mentioning)
Confidence: 99%