2021
DOI: 10.1136/medethics-2020-107102

Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?

Abstract: We argue that interpretability should have primacy alongside empiricism, for several reasons: first, if machine learning (ML) models are beginning to render some high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions such as pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ …

Cited by 64 publications (55 citation statements) · References 28 publications

“…[17][18][19][20][21][22][23], the DL structures can generate better feature representations and are therefore able to improve TTE predictions. However, the low interpretability of DL-based models is a major obstacle to their application in healthcare, where user (patient and clinician) trust is critical [30][31][32]. The use of ROI can improve the TTE prediction performance of the CPH model while maintaining its full interpretability.…”
Section: TTE Prediction Experiments on Synthetic Data (mentioning)
confidence: 99%
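
For context on the excerpt above: TTE refers to time-to-event prediction and CPH to the Cox proportional hazards model. The "full interpretability" claimed for CPH comes from its linear log-hazard: each fitted coefficient exponentiates to a hazard ratio per unit of a covariate, directly readable by a clinician. The following is a minimal sketch using the lifelines library on hypothetical toy data; it illustrates only the CPH baseline, not the cited paper's ROI method.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy survival data: T = follow-up time, E = event indicator.
df = pd.DataFrame({
    "T": [5.0, 6.2, 7.1, 2.3, 9.8, 4.4, 8.0, 3.1],
    "E": [1, 0, 1, 1, 0, 1, 0, 1],
    "age": [62, 55, 70, 48, 66, 59, 73, 51],
    "biomarker": [1.2, 0.8, 1.9, 0.5, 1.1, 1.4, 0.9, 1.6],
})

# Fit h(t | x) = h0(t) * exp(b1*age + b2*biomarker).
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")

# Each exp(coef) is a hazard ratio per unit of the covariate --
# directly readable, unlike the weights of a deep network.
print(cph.summary[["coef", "exp(coef)"]])
```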
“…Furthermore, another AI method similar to that of Attia et al reported similarly encouraging results [10]. However, although the results of such AI studies are promising, DNN-based AI techniques are inherently problematic in several respects, especially in relation to their lack of transparency and explainability, i.e., the ‘black box’ of AI [18,19]. Without the ability to know the exact features of the 12-lead ECG that are most important in a given DNN model’s output, both interpretability and ethical accountability are compromised [45]. Moreover, it is effectively impossible for a clinician to identify, when critically evaluating the diagnostic output of a DNN-based AI model, the possible contribution to the result from methodological artifact or bias merely related to noise or to differing technical specifications between different ECG machines [46].…”
Section: Comparison With Other Heart (or Vascular) Ages (mentioning)
confidence: 99%
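
The excerpt above notes that one cannot easily know which features of a 12-lead ECG drive a DNN's output. One common, though partial, post-hoc workaround is gradient-based saliency: differentiate the model's score with respect to the input and inspect where the gradient is largest. The sketch below uses a hypothetical stand-in network to show the mechanics; it is not the model from the cited studies, and saliency maps are themselves contested as explanations.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a DNN ECG classifier:
# input is (batch, 12 leads, 5000 samples) ~ 10 s at 500 Hz.
model = nn.Sequential(
    nn.Conv1d(12, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 1),
)

ecg = torch.randn(1, 12, 5000, requires_grad=True)  # synthetic ECG
model(ecg).sum().backward()

# Mean absolute input gradient per lead: a crude ranking of which
# leads most influenced the score for this particular input.
saliency = ecg.grad.abs().mean(dim=2).squeeze()
for lead, value in enumerate(saliency.tolist(), start=1):
    print(f"lead {lead}: {value:.5f}")
```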
“…In addition, models in which the assessments were based on age predictions in healthy subjects will likely outperform those that were not. And finally, the use of more transparent regression models will also increase the ability of clinicians to understand the origin of any unexpected result, and to thereafter relay it to the patient with a more convincing sense of trust and ethical accountability [45].…”
Section: Comparison With Other Heart (or Vascular) Ages (mentioning)
confidence: 99%
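
To make "more transparent regression models" concrete: in a linear model of predicted heart age, each coefficient states how many years the prediction moves per unit of a feature, so an unexpected estimate can be traced back to the inputs that produced it. A minimal sketch on synthetic data, with hypothetical feature names:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic standardized ECG-derived features (names are hypothetical).
features = ["heart_rate", "qrs_duration", "qtc"]
X = rng.normal(size=(500, 3))
age = 50 + 5 * X[:, 0] + 3 * X[:, 1] + rng.normal(scale=4, size=500)

reg = LinearRegression().fit(X, age)

# Coefficients read directly as "years of predicted age per SD of
# feature", so a clinician can see which inputs drove the estimate.
for name, coef in zip(features, reg.coef_):
    print(f"{name}: {coef:+.2f} years per SD")
```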
“…"It is very hard to trust any model ... without having transparency into how those models operate" [67]. The perception that interpretability is critical to trust is incredibly widespread [7,16,47,55,64,66,75,86]. However, progress on interpretability has been difficult to measure, as lack of a clear consensus definitions have exposed interpretability's inherent subjectivity and field-specific meanings [23,43,47,48,78].…”
Section: Introductionmentioning
confidence: 99%