2021
DOI: 10.1038/s42256-021-00338-7

AI for radiographic COVID-19 detection selects shortcuts over signal

Abstract: Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that accurately detect COVID-19 in chest radiographs. However, the robustness of these systems remains unclear. Using state-of-the-art techniques in explainable AI, we demonstrate that recent deep learning systems to detect COVID-19 from chest radiographs rely on confounding factors rather than medical pathology, creating an alarming situation in which the systems appear accurate, but fail when tested in new hospitals. …
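The kind of analysis the abstract describes can be illustrated with a simple gradient-based saliency map, one of the most basic explainable-AI techniques in this family. The sketch below is purely illustrative and is not the paper's pipeline: it uses an untrained torchvision ResNet-18 as a hypothetical stand-in for a radiograph classifier and a random tensor in place of a real chest radiograph.

```python
# Minimal sketch (assumptions: model, input, and preprocessing are all
# hypothetical stand-ins, not the paper's actual pipeline).
import torch
import torchvision.models as models

# Hypothetical stand-in for a trained COVID-19 radiograph classifier.
model = models.resnet18(weights=None)
model.eval()

# One 3-channel image tensor (batch of 1); a real pipeline would load and
# normalize an actual radiograph here.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(x)
score = logits[0, logits.argmax()]  # logit of the predicted class
score.backward()

# Vanilla-gradient saliency: per-pixel magnitude of d(score)/d(pixel).
# Strong attributions outside the lungs (shoulders, laterality markers,
# text annotations) are a warning sign that the model relies on
# confounds rather than pathology.
saliency = x.grad.detach().abs().max(dim=1).values  # shape (1, 224, 224)
```

The paper itself pairs saliency maps with generative counterfactuals; this vanilla-gradient version is only the simplest member of that toolbox.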


Cited by 289 publications (173 citation statements)
References 43 publications
“…The problem with lacking insight into model decisions appears when a model learns to predict accurately based on irrelevant features and thus generalizes poorly to other datasets. A recent study, for instance, reported that an accurate AI model trained to identify COVID-19 in chest radiographs actually failed to make use of the relevant information in the images [157]. Due to a consistent patient positioning during imaging, the model instead recognized COVID-19-positive patients based on their shoulder regions.…”
Section: Explaining Decisions Made by AI Models (mentioning)
confidence: 99%
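A toy numerical experiment makes this failure mode concrete. The sketch below is illustrative only (synthetic features, not data from the study): a "confound" feature tracks the label at the training hospital but not at an external one, so a classifier that latches onto it looks accurate internally and degrades on outside data.

```python
# Self-contained toy reproduction of shortcut learning (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, confound_strength):
    """Simulate one hospital: a weak true signal plus a site artifact."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)                        # weak pathology signal
    confound = confound_strength * y + rng.normal(0, 0.5, n)  # e.g., positioning artifact
    return np.column_stack([signal, confound]), y

X_train, y_train = make_site(2000, confound_strength=2.0)  # training hospital
X_int, y_int = make_site(1000, confound_strength=2.0)      # held-out, same hospital
X_ext, y_ext = make_site(1000, confound_strength=0.0)      # new hospital: artifact gone

clf = LogisticRegression().fit(X_train, y_train)
for name, X, y in [("internal", X_int, y_int), ("external", X_ext, y_ext)]:
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    print(f"{name} AUROC: {auc:.2f}")  # near-perfect internally, drops sharply externally
```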
“…Moreover, by knowing the decision pattern, the clinician would be able to assess faithfulness of the prediction. For this reason, explainability is suggested to be an ethical requirement for future clinical decision systems [157]. Here, we introduce different explainable AI methods and give a brief overview of applications in medicine and analysis of molecular pathways.…”
Section: Explain Prediction (mentioning)
confidence: 99%
“…[16] Learning these shortcuts instead of the underlying nature of the problem is a topic of concern in the field [17]. It is then comprehensible for many machine learning methods to spark criticism regarding the difficulty to understand the rationale behind their predictions. It has been questioned whether a pharmaceutical company would promote a given molecule into a portfolio based only on an opaque prediction made by a neural network, without any clear explanation to support it.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, the field has inspired controversy. DeGrave et al [8] demonstrated that combining data from multiple sources, in particular where data from different classes have different acquisition and pre-processing parameters, led to a significant bias that artificially improved the measured performance in many studies. Garcia Santa Cruz et al [9] presented a review of public CXR datasets, concluding that the most popular datasets used in the literature were at a high risk of introducing bias into reported results.…”
Section: Introduction (mentioning)
confidence: 99%
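One common sanity check for such merged datasets, sketched below on synthetic feature vectors (the numbers and the preprocessing story are hypothetical), is to train a classifier to predict each image's source rather than its diagnosis. If the source is easy to predict, source-specific artifacts are present that a disease classifier could exploit whenever class labels are confounded with source.

```python
# Source-predictability check on synthetic features (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical image features from two sources whose preprocessing differs:
# source B is, on average, slightly brighter and more contrast-stretched.
src_a = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
src_b = rng.normal(loc=0.3, scale=1.2, size=(500, 16))
X = np.vstack([src_a, src_b])
y = np.array([0] * 500 + [1] * 500)  # 0 = source A, 1 = source B

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
# Accuracy well above 50% means the source leaves a fingerprint in the
# features, so any class imbalance across sources becomes a shortcut.
print(f"source-prediction accuracy: {acc:.2f}")
```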