2020
DOI: 10.1007/s11042-020-09637-4

Efficient fusion of handcrafted and pre-trained CNNs features to classify melanoma skin cancer

Cited by 49 publications (21 citation statements)
References 51 publications

“…While other reviewed skin lesion classification methods were based on optimized neutrosophic k-means (ONKM) with a genetic algorithm to tune the value of α in the α-mean operation on the neutrosophic set in [74], adaptive k-means and Random Forest in [76], multiclass SVM in [77,78,91], and SVM, ANN, KNN, and TDV in [83]. A buzzard-optimization feature extraction algorithm combined with an SVM classifier achieves 94.3% accuracy, and the buzzard optimization performs particularly well for feature extraction.…”
Section: Discussion (mentioning)
confidence: 99%
“…Filali et al [91] proposed a powerful skin lesion classification technique that combines the most powerful deep learning architectures (VggNet, ResNet, GoogLeNet, and AlexNet) with handcrafted features (color, texture, skeleton, and shape) to diagnose melanoma. The model extracts 1000 features from each pre-trained CNN, so the four pre-trained models together contribute 4000 deep features.…”
Section: Fig 9 DenseNet Architecture (mentioning)
confidence: 99%
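The deep-feature fusion described in the statement above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the exact VGG and ResNet variants are not given, so vgg16 and resnet50 are placeholders, torchvision's pretrained weights and standard ImageNet preprocessing are assumed, and each network's 1000-dimensional output is used as its feature vector before concatenation into a 4000-dimensional descriptor.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical stand-ins for the four backbones named in the citation statement;
# the weight enums and variants are torchvision defaults, not taken from the paper.
backbones = {
    "vgg": models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1),
    "resnet": models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1),
    "googlenet": models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1),
    "alexnet": models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1),
}

# Standard ImageNet preprocessing (assumed, not specified in the statement).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fused_deep_features(image_path: str) -> torch.Tensor:
    """Return a 4 x 1000 = 4000-dimensional fused descriptor for one lesion image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    feats = []
    with torch.no_grad():
        for net in backbones.values():
            net.eval()
            feats.append(net(x).squeeze(0))  # 1000-d output used as a feature vector
    return torch.cat(feats)  # shape: (4000,)
```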
“…Although it has already been reported that DNNs outperform hand-crafted descriptors, hand-crafted methods were shown to be better at discriminating stationary textures under steady imaging conditions and proved more robust than DNN-based features to, for example, image rotation [61]. Moreover, the concatenation of handcrafted features (shape, skeleton, color, and texture) with features extracted from the most powerful deep learning architectures, followed by classification with standard classifiers (e.g., support vector machines), was shown to offer high classification performance [62]. Future efforts will be directed at confirming whether our approach offers any advantages compared to hand-designed methods.…”
Section: Discussion (mentioning)
confidence: 99%
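A minimal sketch of that concatenation-plus-SVM scheme, under assumed inputs: neither the feature dimensions nor the SVM settings come from the cited works, and the handcrafted and deep feature matrices are simply stacked column-wise before fitting a standard scikit-learn SVM.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_fused_classifier(deep_features: np.ndarray,
                           handcrafted_features: np.ndarray,
                           labels: np.ndarray):
    """Fit an SVM on concatenated deep + handcrafted lesion descriptors.

    deep_features: (n_samples, 4000) array, e.g. from the CNN fusion sketch above.
    handcrafted_features: (n_samples, d) shape/skeleton/color/texture descriptors.
    labels: (n_samples,) with 0 = benign, 1 = melanoma (hypothetical encoding).
    """
    X = np.hstack([deep_features, handcrafted_features])  # feature-level fusion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf
```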
“…Feature fusion has been widely adopted by researchers for detection and classification tasks in computer vision [45]-[49]. We have used GIST descriptors with their default values.…”
Section: Local Binary Pattern (LBP) (mentioning)
confidence: 99%
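Since the section above pairs LBP with GIST as handcrafted texture descriptors, here is a minimal sketch of an LBP histogram using scikit-image; the parameters (P=8, R=1, uniform coding) are common defaults rather than values taken from the cited work, and GIST is omitted because it has no standard scikit-image implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Return a normalized uniform-LBP histogram for a grayscale lesion image."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2  # "uniform" coding yields P + 2 distinct code values
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```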