2015
DOI: 10.1177/0018720815598604
Assessing Technology in the Absence of Proof

Running head: Process feedback and system trust. Document type: extended multi-phase study (4 studies). Word count: 12,969 (max. 13,500 for a 4-study article).

Abstract — Objective: The present research addresses the question of how trust in systems is formed when unequivocal information about system accuracy and reliability is absent, focusing on the interaction of indirect information (others' evaluations) and direct (experiential) information stemming from the interaction process. Background: Trust i…

Cited by 2 publications (2 citation statements). References: 34 publications.
“…"I am sorry" or "I apologize") were trusted more than agents that did not [3,24]. Moreover, people are more likely to trust and rely on an automated decision-support system when given an explanation why the decision aid might err [6], or when they inferred such explanations after observing system behaviour themselves [60]. The effectiveness of a trust repair strategy seems to depend on situational factors such as timing [46], violation type [50,54] and agent type [19].…”
Section: Non-human Apology (citation type: mentioning; confidence: 99%)
“…"I am sorry" or "I apologize") were trusted more than agents that did not [3,24]. Moreover, people are more likely to trust and rely on an automated decision-support system when given an explanation why the decision aid might err [6], or when they inferred such explanations after observing system behaviour themselves [60]. The effectiveness of a trust repair strategy seems to depend on situational factors such as timing [46], violation type [50,54] and agent type [19].…”
Section: Non-human Apologymentioning
confidence: 99%
“…For trust to be calibrated, however, humans would need to be able to determine whether the intelligent agent is to blame for the violation or rather the situation [41]; failure to attribute a trust violation to the situation may cause an unwarranted decrease in the human's trust and reliance. Other studies have shown that people may be more forgiving when they understand why a trust violation may sometimes occur [6,60]. Considering this, optimal collaboration between humans and intelligent agents relies heavily on the agent's capacity to effectively communicate with the human, i.e., to explain why a violation has occurred, so as to remedy damaged trust.…”
Section: Introduction (citation type: mentioning; confidence: 99%)