2021
DOI: 10.1093/jamia/ocab238
Trust in AI: why we should be designing for APPROPRIATE reliance

Abstract: Use of artificial intelligence in healthcare, such as machine learning-based predictive algorithms, holds promise for advancing outcomes, but few systems are used in routine clinical practice. Trust has been cited as an important challenge to meaningful use of artificial intelligence in clinical practice. Artificial intelligence systems often involve automating cognitively challenging tasks. Therefore, previous literature on trust in automation may hold important lessons for artificial intelligence application…

Cited by 31 publications (29 citation statements). References 25 publications.
“…How did the endoscopists select and follow the best AI advice? The successful extraction of two critical task parameters was likely at the core of this ability: endoscopists could intuitively but reliably predict for each lesion both their accuracy (not obvious [37, 38]) and the accuracy of the AI (not obvious [39]). Furthermore and importantly, these prediction estimates affected endoscopists’ decisions so that they switched their diagnosis towards the AI opinion more when their confidence was low and AI perceived confidence was high.…”
Section: Discussion (mentioning; confidence: 99%)
“…Three pitfalls undermine the beneficial effects of human–AI interaction. The first two, over-reliance or under-reliance on AI, regard a general attitude towards support systems, which is wrong when decoupled from considerations on the relative informativeness of the AI [39, 41, 42]. The third pitfall is more subtle and pervasive: opaque reliability of AI or human judgments, i.e., the MD might not know how much s/he can trust her own, or the AI’s, judgment in each specific medical problem.…”
Section: Discussion (mentioning; confidence: 99%)
“…While the intention is to make AI systems' user interfaces more effective and easier to use, perceptions that AI is unreliable and that its interface is merely cosmetic have created disagreements about how to design guidelines and principles for AI user interface alternatives. Extending our design knowledge, for example, in the form of design principles and theories, is a must (Benda et al, 2022).…”
Section: AI User Interface Design (mentioning; confidence: 99%)
“…People have different inclinations to trust, known as dispositional trust [27, 51, 52]. Culture, age, attachment styles, and other personal differences all count towards this dispositional trust [27, 53].…”
Section: Personal Differences In How Clinicians Trust (mentioning; confidence: 99%)
“…Clinicians may have positive expectations (or lack thereof) in clinical AI, informed by their past experiences, culture, expertise, gossip, relevant industry news, mental models, affect, etc. [31, 51, 55]. These differences are rarely considered when investigating trust in clinical AI, despite having important implications for how to calibrate trust.…”
Section: Personal Differences In How Clinicians Trust (mentioning; confidence: 99%)