2020
DOI: 10.1016/j.bspc.2020.102041

Assisted deep learning framework for multi-class skin lesion classification considering a binary classification support

Cited by 50 publications
(26 citation statements)
References 12 publications
“…This innovative application has achieved excellent classification performance over conventional intelligent methods [24]-[26]. Esteva et al. [24] demonstrated classification of skin lesions using a single convolutional neural network (CNN) trained end-to-end directly from images, using only pixels and disease labels as inputs, and achieved strong performance in identifying the deadliest and the most common skin cancers; Gessert et al. [25] proposed a patch-based attention architecture that provides global context between small, high-resolution patches, together with a novel diagnosis-guided loss weighting method, outperforming previous methods and improving mean sensitivity by 7%; Harangi et al. [26] designed a deep convolutional neural network framework, based on GoogLeNet Inception-v3, to classify dermoscopy images into seven classes, achieving a remarkable improvement of 7%; Hosny et al. [23] […]. Such fully supervised approaches pose obstacles to a model's expansibility and efficiency, because professional doctors must be convened, at great cost of time, to generate massive annotations. To relax this inconvenience, many works [13], [14], [27]-[30] have investigated applying transfer learning to skin lesion classification.…”
Section: A Supervised Methods In Skin Lesion Classificationmentioning
confidence: 99%
“…The findings indicated that the CNN Xception obtained a superior accuracy rate of 89%. A DCNN architecture in which a binary classifier supports multi-class classification was proposed in [70], offering greater reliability in the predicted probabilities. A single CNN architecture (GoogLeNet Inception-v3) was trained for multi-class and binary classification concurrently.…”
Section: Deep Learning With Transfer Learning and Image Augmentationmentioning
confidence: 99%
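The binary-support idea summarized in the quoted passage can be sketched in a few lines; this is a minimal illustration assuming a simple fusion rule in which the multi-class softmax output is reweighted by a binary malignant-vs-benign probability (the exact combination scheme in the cited framework may differ):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_binary_support(multi_logits, p_malignant, malignant_idx):
    """Reweight multi-class probabilities with a binary malignant/benign
    probability, then renormalize (illustrative fusion rule only)."""
    p = softmax(multi_logits)
    w = np.full_like(p, 1.0 - p_malignant)  # weight for benign classes
    w[malignant_idx] = p_malignant          # weight for malignant classes
    fused = p * w
    return fused / fused.sum()

# Seven hypothetical lesion classes; suppose classes 0 and 1 are malignant.
logits = np.array([2.0, 0.5, 1.0, 0.2, 0.1, 0.0, -0.5])
fused = fuse_binary_support(logits, p_malignant=0.9, malignant_idx=[0, 1])
```

With a high binary malignancy probability, the fused distribution shifts mass toward the malignant classes, which is one way a binary head can make the multi-class probabilities more reliable.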
“…Afterwards, the result (feature maps) at the lth convolutional layer is activated by an activation function to obtain non-linear features. In this model, the ReLU [29] activation was selected, which maps negative inputs to zero and keeps positive values unchanged, as in Equation (15): F(x) = max(0, x), where F(x) is the ReLU output.…”
Section: Multimodal Fusionmentioning
confidence: 99%
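The ReLU activation described above can be written directly as an element-wise operation on a feature map; a minimal sketch:

```python
import numpy as np

def relu(x):
    """ReLU activation: F(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

# A small 2x2 "feature map": negative entries become 0, positives pass through.
feature_map = np.array([[-1.5, 0.0],
                        [2.0, -0.3]])
activated = relu(feature_map)
```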
“…The model accuracy was 0.97 at the patient level. Harangi et al. [15] employed the deep GoogLeNet Inception model to classify dermoscopy images, reaching an accuracy of 0.677.…”
Section: Introductionmentioning
confidence: 99%