2014
DOI: 10.1117/12.2050622
Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems

Abstract: Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even less than 100% reliable systems can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design, for maint…

Cited by 53 publications (57 citation statements) | References 57 publications (66 reference statements)
“…We use Chen et al's (2014) model of agent transparency to examine the impact of agent transparency information on operators' trust in the autonomous robotic agent, SA of the agent's display, and workload while using the system. The use of SAT levels allows us to examine differing amounts of agent transparency on a systematic level, starting with trust.…”
Section: Discussion (mentioning)
confidence: 99%
“…robot), mental models are used to assist the user perceive and interpret the entity's intentions and actions [21]. At the same time, it must be noted that humans tend to have incomplete or even inaccurate mental models [22]. In an industrial HRC scenario humans will be requested to share the same workspace and collaborate with an industrial robot to complete a task.…”
Section: Part 1: Operator Training Programme For Initial Trust Calibration (mentioning)
confidence: 99%
“…Also, through exposure they will be in a position to identify factors that diminish or enhance the robot's ability to perform as well as detect cues that suggest a potential malfunction. According to [22] trust can be calibrated by providing an accurate understanding of the factors that may lead the robot to fail and the outcomes of those failures. To leverage this potential and enable effective HRC, it is proposed that operator empowerment can be a key strategy.…”
Section: Part 2: Operator Empowerment For Continuous Trust Calibration (mentioning)
confidence: 99%
“…Recently there has been a marked increase in attention for other, more subtle trust cues in human-system interaction than outcome feedback, such as goal similarity (Verberne, Ham, & Midden, 2012) and cues conveying transparency and system rationale (e.g., De Visser et al., 2014; Helldin et al., 2013; Ososky et al., 2014; Thill, Hemeren, & Nilsson, 2014). Nevertheless, the effects on trust of direct experiences in the absence of outcome feedback have, to our knowledge, not received any attention in human factors research.…”
Section: Opinions And The Interaction Process (mentioning)
confidence: 99%
“…Information obtained from process feedback does not necessarily have anything to do with actual algorithms and functions employed by the system (such as cost functions used by route planners to calculate routes) but is the result of the users' information processing based on the cues provided to them via a system's interface displays.…”
(mentioning)
confidence: 99%