The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols for designing trust experiments. In this paper, we present a survey of existing methods to empirically investigate trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integrated into experimental protocols, which can lead to findings that are overclaimed or hard to interpret and compare across studies. Drawing on empirical practices in social and cognitive studies of human-human trust, we provide practical guidelines for improving the methodology of studying Human-AI trust in decision-making contexts. In addition, we bring forward two types of research opportunities: one focusing on further investigation of trust methodologies, and the other on factors that impact Human-AI trust.

CCS Concepts: • Human-centered computing → HCI theory, concepts and models.
Trust has become a priority when designing and deploying AI-embedded systems, alongside other Human-Centered AI values such as explainability, transparency, and fairness. However, due to their multifaceted and multidisciplinary nature, these terms can have various context-dependent meanings. Translating these values into design can thus be a challenge [6], and trust is no exception.

Our understanding of what Human-AI trust is and what factors affect it comes largely from controlled lab experiments or studies with prototypes of AI-embedded systems [3, 7]. However, little is known about how Human-AI trust is addressed in the development and deployment of real-world AI products and services. AI practitioners, i.e. people involved in different aspects of system design and deployment in the field, with roles ranging from AI developers to project managers and policy makers, can shed light on the role of Human-AI trust and on which Human-AI trust factors are considered in real organizational settings. Their insights can better detail the needs, challenges, and experiences of different stakeholders when it comes to Human-AI trust.

In this work-in-progress paper, we study how Human-AI trust is addressed in the development and deployment of real AI systems. We conduct a series of interviews with AI practitioners who develop and deploy AI-embedded decision support systems in various risk-sensitive contexts (finance, law, management). We focus on these systems in particular because human trust in AI is especially pertinent to them, given their potential societal impact. The interviews are part of a larger project on AI practitioners' experiences with Human-AI trust; in this working paper we report preliminary findings from the first 5 interviewees (see Table 1).
Specifically, we present our preliminary analysis of participants' replies to the questions regarding the role of Human-AI trust in their practices and the factors considered when establishing it in the context of AI-assisted decision making.

For the analysis of the results, two independent reviewers read all the interviews at least twice and independently identified phrases of interest and codes for them, following the thematic analysis approach [1]. Together, they compared and finalized the list of selected phrases and refined the formulation of the codes. By grouping the codes, the reviewers identified three major themes: 1) the role of Human-AI trust in developing and designing AI-embedded decision support systems, 2) the importance of Human-AI trust in AI practitioners' work, and 3) the factors AI practitioners believe contribute to establishing trust in their systems.