2022
DOI: 10.1109/tcyb.2021.3069920
Interpretability-Based Multimodal Convolutional Neural Networks for Skin Lesion Diagnosis

Cited by 43 publications (24 citation statements) · References 42 publications
“…In [ 35 ], a multimodal convolutional neural network (IM-CNN) is presented: a model that takes dermatoscopic images and patient metadata as input for the multiclass diagnosis of pigmented skin lesions. The modeling was carried out on the open HAM10000 dataset (“Human Against Machine with 10000 training images”), which is part of the ISIC Melanoma Project open database and consists of seven diagnostic categories.…”
Section: Discussion
confidence: 99%
“…In dermatology databases, heterogeneous data mining makes it possible to combine statistical patient metadata with dermoscopic images, greatly improving the recognition of pigmented skin lesions. The use of multimodal neural network systems [ 34 , 35 , 36 , 37 ], as well as methods for combining metadata and multidimensional visual data [ 38 ], has significantly improved the accuracy of pigmented skin lesion recognition.…”
Section: Introduction
confidence: 99%
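The metadata–image fusion these citations describe can be illustrated with a minimal late-fusion sketch. Everything below is a hypothetical stand-in, not the authors' IM-CNN: a 128-dimensional vector substitutes for CNN image features, three encoded fields substitute for patient metadata, and the seven classes mirror the HAM10000 diagnostic categories.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(image_features, metadata, weights, bias):
    """Late fusion: concatenate image features with patient metadata,
    then apply a linear classifier with softmax over the classes."""
    fused = np.concatenate([image_features, metadata])
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical dimensions: 128-dim image embedding, 3 metadata fields
# (e.g., scaled age, sex, lesion-site code), 7 diagnostic categories.
img_feat = rng.normal(size=128)
meta = np.array([0.45, 1.0, 0.2])
W = rng.normal(size=(7, 131)) * 0.01
b = np.zeros(7)

probs = fuse_and_classify(img_feat, meta, W, b)
```

In a trained multimodal network the concatenation would happen between learned layers rather than at a fixed linear head, but the fusion point itself looks the same.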
“…This can be explained by the fact that the model can learn spurious correlations, causing interpretability methods to assign exaggerated importance to the spurious regions highlighted in the produced saliency maps. In the same context, Wang et al. [141] proposed a multimodal CNN for skin lesion diagnosis that considered both patient metadata and skin lesion images. To analyze the contribution of each patient-metadata feature, they adopted SHAP.…”
Section: Dataset
confidence: 99%
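SHAP-style attribution of patient-metadata features, as described in the citation above, can be sketched exactly for a linear model, where the Shapley value of feature *i* reduces to w_i · (x_i − E[x_i]) with the expectation taken over a background dataset. The weights and feature values below are invented for illustration; a real analysis would run the `shap` library against the trained model.

```python
import numpy as np

def linear_shap_values(weights, x, background_mean):
    """Exact SHAP values for a linear model f(x) = w.x + b:
    feature i contributes w_i * (x_i - E[x_i])."""
    return weights * (x - background_mean)

# Hypothetical metadata features: [scaled age, sex, lesion-site code]
w = np.array([0.8, -0.3, 0.5])    # invented model weights
x = np.array([0.6, 1.0, 0.2])     # one patient's encoded metadata
bg = np.array([0.5, 0.5, 0.4])    # background (dataset) means

phi = linear_shap_values(w, x, bg)
```

By the efficiency property, the contributions sum to f(x) − E[f(x)], which is what makes the per-feature bars in a SHAP summary plot additive.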
“…3–5 Department of Information Science and Engineering, Northeastern University, Shenyang, China.…”
Section: Appendix Acknowledgements
confidence: 99%
“…Recent advancements in deep learning have been applied across medical fields to detect anomalies at an earlier stage or to predict them [3][4][5]. In the ophthalmological domain, digital fundus image analysis using deep learning methods has started to receive a lot of attention.…”
Section: Introduction
confidence: 99%