2021
DOI: 10.1145/3476068

How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies

Abstract: The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols to design trust experiments. In this paper, we present a survey of existing methods to empirically investigate trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integr…

Cited by 85 publications (42 citation statements) | References 238 publications

Citation statements (ordered by relevance):
“…It provides an alternative explanation to the observations that adding transparency features often increase people's trust even if the model should not be relied upon [3,59,66,70]: they may have enhanced people's intention and process based trust rather than ability based trust. Future research should further unpack the dimensions of trustworthy AI and their relations with conceptually relevant constructs, especially behavioral outcomes such as reliance and compliance [65].…”
Section: Discussion: Towards Responsible Trust in AI (mentioning)
confidence: 99%
“…The current academic and public discourses are predominantly structured around the guiding principles towards trustworthy AI [62,64], often as a way to operationalize principles for responsible and ethical AI [44], such as ensuring effectiveness, fairness, transparency, robustness, privacy, security, and serving human values. These principles are inherently technocentric, focusing on what constitutes the trustworthiness of AI, when in fact trust is a human judgment or attitude, which can be formally defined as a judgment of dependability in situations characterized by vulnerability [34,65]. The same AI technology can be judged differently by different people, with some forming inaccurate trust judgments.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, little is known about what definitions, theories, and models of trust have been used and for what AI-infused systems. Recent literature reviews have started to trace out this body of work [16,52]. Glikson and Woolley [16] reviewed empirical studies on trust in all forms of AI, examining the factors that make up human trust in AI, but not how to assess these factors.…”
Section: Introduction (mentioning)
confidence: 99%
“…Glikson and Woolley [16] reviewed empirical studies on trust in all forms of AI, examining the factors that make up human trust in AI, but not how to assess these factors. Vereschak et al [52] focused on the context of AI decision support, reviewing methods and providing practical guidelines for studying trust between humans and AI. What is missing is a general perspective that includes models and measures regardless of system type and context.…”
Section: Introduction (mentioning)
confidence: 99%
“…Artificial Intelligence (AI)-based systems are supporting humans in an increasing number of decisions and in a multitude of society-impacting applications [12,82]. As building trust in such systems is deemed critical for their adoption and appropriate use [35,42], much attention in recent research in computer science, human-computer interactions (HCI), and explainable AI (XAI) has been given to the development of trustworthy AI guidelines [38,80], and methods to foster and evaluate trust in human-AI interactions [36,68,83].…”
Section: Introduction (mentioning)
confidence: 99%