Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment
2019 | DOI: 10.1080/21515581.2019.1579730

Cited by 41 publications (23 citation statements) | References 40 publications
“…Empirical research demonstrates the positive impact of AI transparency and explainability on trust [e.g. 48,49,50]. Experimental research undertaken in military settings indicates that when human operators and AI agents collaborate, increased transparency enhances trust [48,49].…”
Section: Transparency and Explainability
confidence: 99%
“…48,49,50]. Experimental research undertaken in military settings indicates that when human operators and AI agents collaborate, increased transparency enhances trust [48,49]. Explanations have been shown to increase trust in the results of a product release planning tool [51].…”
Section: Transparency and Explainability
confidence: 99%
“…Previous research has demonstrated users' trustworthiness perceptions of robots to be an important factor (Alarcon et al, 2021). These context-specific antecedents to trust have demonstrated predictive validity for trust and reliance across a wide range of studies in the interpersonal trust literature (Colquitt et al, 2007) and more recently have been investigated in the trust in automation literature (Calhoun et al, 2019).…”
Section: Trust Toward Automation and HRI
confidence: 99%
“…While benevolence and integrity are not directly attributable to the technology, users are able to personify technology (Nass and Moon, 2000). Systems that parallel human-like characteristics and personas (i.e., humanoid robots, intelligent agents such as Alexa) tend to elicit more trust than systems designed with the same capacities and purpose but with non-anthropomorphized characteristics (Hancock et al, 2011; de Visser et al, 2017; Calhoun et al, 2019). However, there is a point where extreme similarity between a technology and a human can result in a significant drop in trust levels, often referred to as the uncanny valley (Flemisch et al, 2017).…”
Section: Basis Of Trust In Interpersonal Versus Technology/Automation
confidence: 99%