2020 · Preprint
DOI: 10.48550/arxiv.2003.10769

Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection

Biraja Ghoshal, Allan Tucker

Abstract: Deep learning has achieved state-of-the-art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or prediction without quantifying uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology and therefore improving treatment. Today, the 2019 Coronavirus (COVID-19) infections are a major healthcare challenge around the world. …

Cited by 83 publications (102 citation statements) · References 11 publications (15 reference statements)
“…This black-box nature of ViTs has hindered their deployment in clinical practice since, in areas such as medical applications, it is imperative to identify the limitations and potential failure cases of designed systems, where interpretability plays a fundamental role [393]. Although several explainable AI-based medical imaging systems have been developed to gain deeper insights into the workings of CNN models for clinical applications [394]-[396], the work is still in its infancy for ViT-based medical imaging applications.…”
Section: Interpretability (mentioning)
confidence: 99%
“…A suggested "Corona score" is used to track how patients progress over time. In another paper, Ghoshal and Tucker (2020) examined how dropweights-based Bayesian Convolutional Neural Networks (BCNNs) can estimate uncertainty in deep learning predictions, improving the diagnostic accuracy of the human-machine combination on COVID-19 chest X-rays. Their primary goal is to avoid false-negative detections.…”
Section: Artificial Neural Network in COVID-19 (mentioning)
confidence: 99%
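The excerpt above describes dropweights-based Bayesian CNNs as a way to attach an uncertainty estimate to each prediction. The paper's exact dropweights construction is not reproduced here; the sketch below uses the closely related Monte Carlo dropout approximation, where dropout stays active at inference and the predictive entropy of repeated stochastic forward passes serves as the uncertainty signal. The toy architecture, sample count, and review threshold are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallBayesianCNN(nn.Module):
    """Toy CNN with dropout layers kept active at test time,
    approximating a Bayesian CNN via Monte Carlo dropout
    (a stand-in for the paper's dropweights variant)."""
    def __init__(self, n_classes=2, p_drop=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout2d(p_drop),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled;
    return mean class probabilities and predictive entropy."""
    model.train()  # keeps dropout stochastic; gradients are disabled above
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                  # (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)     # Monte Carlo estimate of p(y | x)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)
    return mean_probs, entropy

# Usage: flag low-confidence predictions for human review, i.e. the
# human-machine combination the excerpt refers to.
model = SmallBayesianCNN()
x = torch.randn(4, 1, 224, 224)        # stand-in for preprocessed X-rays
mean_probs, entropy = mc_dropout_predict(model, x)
needs_review = entropy > 0.5           # threshold chosen for illustration
```

High predictive entropy marks cases, such as likely false negatives, that should be deferred to a radiologist rather than trusted automatically.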
“…Only a handful of works address the uncertainty of COVID detection methods (Mallick et al. 2020; Ghoshal and Tucker 2020). Mallick et.…”
Section: Related Work (mentioning)
confidence: 99%