2022
DOI: 10.1007/s10676-022-09649-8
Putting explainable AI in context: institutional explanations for medical AI

Abstract: There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend that these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need to explain each individual decision. But, as we show, both solutions have …

Cited by 16 publications (10 citation statements). References 35 publications.
“…The question of whether AI algorithms may need to be more generalizable, trained on larger and more diverse datasets to be applied to broader populations, or more localized and applied narrowly remains to be addressed. In any case, AI models will have to be explainable 15 with transparent methodologies so that these questions can be studied and debated in the coming years.…”
Section: A Look Into the Future—Challenges With Continuously Learning… (mentioning)
confidence: 99%
“…If the surgeon is not able to explain the basis on which they make their decisions, they are therefore unable to justify them. 26 That negative thoughts were moderated by positive ones in theme four may be due to the rise of AI in areas of modern life outside medical research. 20 Modern society is firmly in the information age -a shift the size of which has been likened to the Industrial Revolution.…”
Section: Discussion (mentioning)
confidence: 99%
“…Moreover, explainable AI can help clinicians understand how AI algorithms make decisions and how they arrive at their recommendations. 85 Regulatory approval and certification can help further establish clinician's trust in the safety and effectiveness of AI algorithms. 86 Clinicians will be more likely to trust AI algorithms that have been approved by regulatory bodies such as the FDA, and that have undergone rigorous certification processes.…”
Section: Clinical Practice (mentioning)
confidence: 99%
“…For this, algorithms should undergo rigorous testing and validation to ensure that they perform as intended across a range of scenarios and patient populations. Moreover, explainable AI can help clinicians understand how AI algorithms make decisions and how they arrive at their recommendations 85 . Regulatory approval and certification can help further establish clinician's trust in the safety and effectiveness of AI algorithms 86 .…”
Section: Applications Of Artificial Intelligence In Heart Failure (mentioning)
confidence: 99%