2021
DOI: 10.2147/opth.s312236

Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis

Abstract: Background: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community, despite highly accurate results on multiple problems. Attribution methods that explain deep learning models have been tested on medical imaging problems. The performance of various attribution methods has been compared for models trained on standard machine learning datasets, but not on medical images. In this study, we performed a comparative analysis to dete…

Cited by 32 publications (15 citation statements)
References 26 publications (44 reference statements)
“…Furthermore, classification is not limited to DR detection, and DCNNs can be applied to detect the presence of DR-related lesions, as in the study reported by Wang et al. They cover twelve lesions in their study: MA, IHE, superficial retinal hemorrhages (SRH), Ex, CWS, venous abnormalities (VAN), IRMA, NV at the disc (NVD), NV elsewhere (NVE), pre-retinal FIP, VPHE, and tractional retinal detachment (TRD), with an average precision of 0.67 and an AUC of 0.95; however, features such as VAN have low individual detection accuracy. This study provides essential steps for DR detection based on the presence of lesions, which could be more interpretable than DCNNs that act as black boxes [86, 87, 88].…”
Section: Results (mentioning)
confidence: 99%
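The per-lesion average precision and AUC figures quoted in this excerpt come from a multi-label setting, where each metric is computed per lesion class. A minimal sketch of that kind of evaluation, using scikit-learn and synthetic predictions (only the lesion abbreviations come from the excerpt; the data and model outputs are illustrative):

```python
# Sketch: per-lesion average precision (AP) and AUC for a multi-label
# DR-lesion classifier. Labels and scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Lesion abbreviations from the excerpt (Wang et al.).
LESIONS = ["MA", "IHE", "SRH", "Ex", "CWS", "VAN",
           "IRMA", "NVD", "NVE", "FIP", "VPHE", "TRD"]

rng = np.random.default_rng(0)
n_images = 500
y_true = rng.integers(0, 2, size=(n_images, len(LESIONS)))  # ground-truth lesion presence
y_score = rng.random(size=(n_images, len(LESIONS)))         # model probabilities

# Per-lesion metrics: subtle or rare features (e.g. VAN) typically
# score lower individually, as the excerpt notes.
for i, name in enumerate(LESIONS):
    ap = average_precision_score(y_true[:, i], y_score[:, i])
    auc = roc_auc_score(y_true[:, i], y_score[:, i])
    print(f"{name:4s}  AP={ap:.2f}  AUC={auc:.2f}")

# Macro averages over all twelve lesions.
print("mean AP:", average_precision_score(y_true, y_score, average="macro"))
print("mean AUC:", roc_auc_score(y_true, y_score, average="macro"))
```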
“…designed a DL model named Trilogy of Skip-connection Deep Networks (Tri-SDN) over the pretrained base model ResNet50, applying skip-connection blocks to make the tuning faster, yielding an ACC and SP of 90.6% and 82.1%, respectively, considerably higher than the 83.3% and 64.1% obtained when skip-connection blocks are not used. Furthermore, classification is not limited to DR detection, and DCNNs can be applied to detect the presence of DR-related lesions, such as that reported by Wang et al. 2020, covering twelve lesions: MA, IHE, superficial retinal hemorrhages (SRH), Ex, CWS, venous abnormalities (VAN), IRMA, NV at the disc (NVD), NV elsewhere (NVE), pre-retinal FIP, VPHE, and tractional retinal detachment (TRD), with an average precision of 0.67 and an AUC of 0.95; however, features such as VAN have low individual detection accuracy. This study provides essential steps for DR detection based on the presence of lesions, which is more interpretable than DCNNs that act as black boxes [86, 87, 88]. There are explainable backpropagation-based methods that produce heatmaps of the lesions affecting the DR classification, such as the study by Keel et al. 2019 [89], which highlights Ex, HE, and vascular abnormalities in images diagnosed with DR. These methods have limited performance, providing generic explanations that may be inadequate for clinical reliability.…”
mentioning
confidence: 99%
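The backpropagation-based heatmap methods this excerpt refers to attribute a prediction to input pixels via gradients. A minimal vanilla-gradient saliency sketch in PyTorch, using an untrained ResNet50 as a stand-in (this is a generic attribution method, not the cited Tri-SDN or Keel et al. pipelines):

```python
# Sketch: vanilla-gradient saliency for an image classifier.
# The untrained ResNet50 is a stand-in; a real setup would load
# weights trained on fundus images.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder fundus image

logits = model(image)
cls = logits.argmax(dim=1).item()   # explain the predicted class
logits[0, cls].backward()           # backpropagate the class score to the input

# Saliency: max |gradient| across colour channels -> one value per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```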
“…However, other areas of medicine, for example ophthalmology, have shown that certain classifiers approach clinician-level performance. Of further importance is the development of explainable AI methods, which have been applied in ophthalmology, where correlations are made between the areas of the image that the clinician uses to make decisions and the ones used by the algorithms to arrive at the result (i.e., the portions of the image that most heavily weight the neural connections) [71, 185-187].…”
Section: Discussion (mentioning)
confidence: 99%
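One widely used way to compute the image regions "used by the algorithm" is Grad-CAM, which weights the final convolutional feature maps by their pooled gradients; the excerpt does not name a specific method, so this is an assumed illustration. A minimal sketch, again with a stand-in ResNet50 and a random input (not any model from the cited studies):

```python
# Sketch: Grad-CAM over the last convolutional block of a ResNet50.
# Model and input are placeholders, not the cited ophthalmic models.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
feats, grads = {}, {}

# Capture activations and gradients of the last conv block (layer4).
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)  # placeholder fundus image
logits = model(x)
logits[0, logits.argmax(dim=1).item()].backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)             # pooled gradient per channel
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))   # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```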
“…Among XAI methods applied to these AI systems, most employ attribution-based methods to generate post-hoc local heatmaps that represent the regions of the input image contributing most to the output decision [33, 34▪, 35]. By visualizing that the attention areas of DL models correlate with clinically relevant features, these heatmaps can potentially boost clinicians’ confidence in model output decisions [17▪▪, 36]. Another example is the use of occlusion testing (a perturbation-based XAI method) to repeatedly mask 100 × 100 pixel areas within CFP, where masking of clinically important lesions corresponded to a reduction in the confidence of DL predictions in AMD severity classification [10].…”
Section: Clinical Applications of Explainable Artificial Intelligence… (mentioning)
confidence: 99%
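A minimal sketch of the occlusion testing described above: slide a 100 × 100 mask over the image and record how much the predicted-class confidence drops. The model and image are generic placeholders, not the cited AMD classifier:

```python
# Sketch: occlusion testing. Hiding a region the model relies on
# should reduce the predicted-class confidence; the grid of
# confidence drops acts as an importance map.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # stand-in for the cited AMD classifier
image = torch.rand(1, 3, 400, 400)     # placeholder colour fundus photograph (CFP)

with torch.no_grad():
    probs = model(image).softmax(dim=1)
    cls = probs.argmax(dim=1).item()
    baseline = probs[0, cls].item()    # unoccluded confidence

patch = stride = 100                   # 100 x 100 occluder, non-overlapping
rows, cols = image.shape[-2] // stride, image.shape[-1] // stride
drop = torch.zeros(rows, cols)
for i in range(rows):
    for j in range(cols):
        occluded = image.clone()
        occluded[..., i*stride:i*stride+patch, j*stride:j*stride+patch] = 0.0
        with torch.no_grad():
            p = model(occluded).softmax(dim=1)[0, cls].item()
        drop[i, j] = baseline - p      # confidence drop when this region is hidden

print(drop)  # larger values = regions more important to the prediction
```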
“…XAI can also aid clinicians in clinical diagnosis and management. Post-hoc XAI methods, which highlight important sites in the input image that influence the output decision, can serve as a diagnostic aid for clinicians by facilitating closer inspection of these areas for clinically relevant features [36]. In a multiclass DL model for refractive surgery choice, the importance ranking of input variables (142 measurements from Pentacam and clinical examination) using SHAP corresponded to clinically relevant factors, guiding ophthalmologists in selecting the most appropriate type of refractive surgery [38].…”
Section: Clinical Applications of Explainable Artificial Intelligence… (mentioning)
confidence: 99%
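The SHAP-based importance ranking described in this excerpt can be sketched with a generic tabular classifier; the synthetic features below stand in for the 142 Pentacam and clinical measurements of the cited study, and the random forest stands in for the cited DL model:

```python
# Sketch: SHAP importance ranking for a multiclass tabular classifier.
# Ten synthetic features stand in for the 142 Pentacam/clinical
# measurements; the three classes are hypothetical surgery choices.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))       # illustrative measurements
y = rng.integers(0, 3, size=300)     # three hypothetical surgery choices

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = np.asarray(explainer.shap_values(X))   # per-class SHAP values
if sv.ndim == 3 and sv.shape[0] == len(X):  # newer shap: (samples, features, classes)
    sv = np.moveaxis(sv, -1, 0)             # -> (classes, samples, features)

# Global ranking: mean |SHAP| per feature over classes and samples.
importance = np.abs(sv).mean(axis=(0, 1))
for idx in np.argsort(importance)[::-1]:
    print(f"measurement_{idx}: mean |SHAP| = {importance[idx]:.4f}")
```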