2022
DOI: 10.48550/arxiv.2204.13828
Preprint

Designing for Responsible Trust in AI Systems: A Communication Perspective

Q. Vera Liao,
S. Shyam Sundar

Abstract: Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from communication t…


Cited by 2 publications (2 citation statements)
References 34 publications
“…Similarly, consideration of situational factors of past interactions with others in the design of human-centered AI systems has been emphasized [10]. Understanding these human-centered factors is crucial for enhancing user trust and adoption, and ensuring successful integration of AI systems [11,26,27].…”
Section: User-Centered AI Systems in HCI
confidence: 99%
“…Researchers have identified a large set of factors that may influence people's reliance on AI, including the AI model's accuracy (Yin, Wortman Vaughan, and Wallach 2019; Lai and Tan 2019), confidence (Zhang, Liao, and Bellamy 2020; Rechkemmer and Yin 2022), the type of AI explanations and the ways that they are presented (Yang et al. 2020; Bansal et al. 2021b), humans' mental models about AI (Bansal et al. 2019a,b), and the level of human-model agreement (Lu and Yin 2021). Many experimental studies have shown that decision makers often cannot rely on AI models appropriately, which has led to new studies on designing innovative methods to promote appropriate reliance on AI (Buçinca, Malaya, and Gajos 2021; Park et al. 2019; Liao and Sundar 2022; Chiang and Yin 2022).…”
Section: Related Work
confidence: 99%