2021
DOI: 10.1109/jbhi.2021.3052044
A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis

Cited by 56 publications (32 citation statements)
References 45 publications
“…The hyperparameters of the competitive models are obtained from their published papers. These competitive models are CNN [10], EDLP [9], SqueezeNet [11], LWADL [12], Unet-dCNN [13], ResNet-50 [15], FADEM [16], and Ensemble CNN [17]. Table 2 demonstrates the testing analysis of the proposed and competitive models.…”
Section: Discussion
confidence: 99%
“…In [12], a lightweight attention-based deep learning model (LWADL) was designed to predict eleven skin diseases. LWADL achieved better accuracy as compared to VGG19, VGG16, ResNet50, and InceptionV3.…”
Section: Related Work
confidence: 99%
“…Combined with Grad-CAM and UMAP embedding methods, we further provided an intuitive visualization of the local and global feature patterns of all EMB images learned by the VGG-19 model. Unlike other applications in cancer (24,(30)(31)(32), the implementation of this new model in myocardial injury reveals ill-defined histopathological patterns in local regions, providing a guideline and attention maps for well-trained pathologists. Therefore, integrating VGG-19 with Grad-CAM and UMAP embedding methods provides an interpretive DNN model for more accurate histopathological analyses.…”
Section: Discussion
confidence: 99%
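The Grad-CAM step mentioned in the excerpt above can be sketched with the standard Grad-CAM definition: gradients of the class score with respect to a convolutional layer's activations are global-average-pooled into channel weights, and the heatmap is the ReLU of the weighted channel sum. This is a minimal NumPy sketch of that formula, not the cited authors' implementation; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM heatmap (illustrative sketch, not the paper's code).

    feature_maps: conv-layer activations, shape (C, H, W).
    gradients:    d(class score)/d(activations), same shape (C, H, W).
    Returns a (H, W) heatmap.
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, feature_maps, axes=1)
    # ReLU: keep only features with positive influence on the class score.
    return np.maximum(cam, 0)
```

In practice the resulting map is upsampled to the input-image resolution and overlaid on the histopathology slide as an attention map.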
“…V2 is a CNN model designed specifically for portable and resource-constrained circumstances. It is founded on an inverted residual structure in which the connections of the residual structure are linked to the bottleneck layers [30]. There are 153 layers in MobileNet V2, and the size of the input layer is h × w × k, where h = 224, w = 224, and k represents the channels, three of which are in the first layer.…”
Section: MobileNetV2-MobileNet
confidence: 99%
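The inverted residual block described in this excerpt expands the input channels with a 1×1 convolution, filters them with a cheap depthwise k×k convolution, then projects back down to a narrow bottleneck. A small sketch of the resulting weight-count arithmetic (a simplified illustration assuming the standard MobileNetV2 block layout; batch-norm parameters and biases are ignored, and the function name is hypothetical):

```python
def inverted_residual_params(c_in, c_out, expansion=6, k=3):
    """Weight count of one MobileNetV2-style inverted residual block:
    1x1 expansion conv -> depthwise kxk conv -> 1x1 projection conv.
    Ignores batch-norm parameters and biases (illustration only)."""
    c_mid = c_in * expansion          # expansion widens channels by the factor t
    expand = c_in * c_mid             # 1x1 pointwise expansion weights
    depthwise = c_mid * k * k         # depthwise conv: one kxk filter per channel
    project = c_mid * c_out           # 1x1 pointwise projection to the bottleneck
    return expand + depthwise + project
```

For example, a block taking 16 channels to a 24-channel bottleneck with the usual expansion factor of 6 needs 16·96 + 96·9 + 96·24 = 4704 weights, which shows why the depthwise middle stage keeps the block cheap compared with a full 3×3 convolution.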