2022
DOI: 10.1007/s11042-022-13046-0
Detecting skin lesions fusing handcrafted features in image network ensembles

Cited by 16 publications (7 citation statements)
References 17 publications
“…Relative to alternative models for skin lesion classification, our proposed SkinSwinViT model exhibits superior advancement. Specifically, compared to the extended hybrid model + handcrafted feature model by Sharafudeen et al. (2023) [38], our SkinSwinViT model significantly enhances predictive accuracy, precision, and specificity while maintaining a more parsimonious architecture, with improvements of 5.9%, 3.7%, and 1.6%, respectively. Furthermore, compared to the outstanding BF²SkNet model by Ajmal et al. (2023) [43], our SkinSwinViT model demonstrates exceptional performance, with increases in accuracy and precision of 0.7% and 2.7%, respectively, under similar complexity conditions.…”
Section: Discussion
confidence: 99%
“…
Method | Accuracy (%) | Precision (%) | Specificity (%)
MobileNet + handcrafted features [36] | 92.4 | 92.1 | 90.0
ResNet + Inceptionv3 [37] | 85.1 | 79.6 | 82.91
Hybrid Model + handcrafted features [38] | 91.9 | 94.1 | 97.7
Deep learning and moth flame optimization [39] | 90.6 | – | –
A CNN-based pigmented framework [40] | 91.5 | – | –
A CNN and nature-inspired optimization algorithm [41] | 91.7 | 92.4 | –
Two-stream CNN framework [42] | 96.5 | – | –
BF²SkNet model [43] | 97.1 | 95.1 | –
Proposed SkinSwinViT | 97.8 | 97.8 | 99.3
…”
Section: Technique
confidence: 99%
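The column headings above are inferred rather than stated in the fragment; a quick arithmetic check against the Discussion statement supports reading the three numeric columns as accuracy, precision, and specificity:

```latex
% Differences between the SkinSwinViT row and rows [38] and [43],
% matching the improvements quoted in the Discussion statement.
\begin{align*}
\text{vs. [38]:}\quad 97.8 - 91.9 &= 5.9\ (\text{accuracy}), & 97.8 - 94.1 &= 3.7\ (\text{precision}), & 99.3 - 97.7 &= 1.6\ (\text{specificity})\\
\text{vs. [43]:}\quad 97.8 - 97.1 &= 0.7\ (\text{accuracy}), & 97.8 - 95.1 &= 2.7\ (\text{precision})
\end{align*}
```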
“…A mix of images, hand-extracted features, and metadata is used in [131] to perform a multiclass classification based on ensemble networks. Multiple multi-input single-output (MISO) models, obtained by replacing the backbones with EfficientNet networks B4 to B7, are trained with the images to extract features, whereas the hand-extracted features and metadata are used for training an MLP with two dense layers.…”
Section: ML/DL Hybrid Techniques
confidence: 99%
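A minimal sketch of one MISO ensemble member as described in the statement above, assuming a Keras-style setup; the input size, the number of handcrafted features and metadata fields, the layer widths, and the class count are illustrative assumptions, not the configuration used in [131].

```python
# Hedged sketch: multi-input single-output (MISO) members of the ensemble in [131].
# Backbone choice (EfficientNet B4..B7) follows the statement above; feature sizes and
# the two-dense-layer MLP head are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_miso_model(backbone_cls, img_size=380, n_handcrafted=64, n_meta=8, n_classes=8):
    # Image branch: an EfficientNet backbone used as a feature extractor.
    image_in = layers.Input((img_size, img_size, 3), name="image")
    backbone = backbone_cls(include_top=False, weights="imagenet", pooling="avg")
    img_feat = backbone(image_in)

    # Tabular branch: handcrafted features + metadata pass through a small MLP
    # with two dense layers, as described in the citing survey.
    tab_in = layers.Input((n_handcrafted + n_meta,), name="handcrafted_and_meta")
    x = layers.Dense(128, activation="relu")(tab_in)
    x = layers.Dense(64, activation="relu")(x)

    # Fuse both branches and predict a single multiclass output (MISO).
    fused = layers.Concatenate()([img_feat, x])
    out = layers.Dense(n_classes, activation="softmax")(fused)
    return models.Model([image_in, tab_in], out)

# One ensemble member per backbone, EfficientNet B4 to B7.
backbones = [applications.EfficientNetB4, applications.EfficientNetB5,
             applications.EfficientNetB6, applications.EfficientNetB7]
ensemble = [build_miso_model(b) for b in backbones]
```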
“…CNN uses its deep layers as feature extractors [25]. CNN features are learned from training data, whereas experts craft handcrafted (HC) features to capture a particular set of attributes [26]. The need for computing resources and for access to huge training sets are the primary barriers to using a CNN as an effective feature extractor [27].…”
Section: Introduction
confidence: 99%
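To make the contrast in that statement concrete, a hedged sketch of both feature types follows; the choice of MobileNetV2 as the pretrained CNN and of a per-channel colour histogram as the handcrafted descriptor is illustrative and not taken from [25–27].

```python
# Hedged sketch: deep CNN layers as a learned feature extractor versus an
# expert-designed handcrafted (HC) descriptor. Model and descriptor choices
# are illustrative assumptions, not the methods of the cited papers.
import numpy as np
import tensorflow as tf

# Learned features: activations of a pretrained CNN's globally pooled last layer.
cnn = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet", pooling="avg")

def cnn_features(images):
    # images: (N, 224, 224, 3) array with pixel values in [0, 255]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
    return cnn.predict(x, verbose=0)  # (N, 1280) learned feature vectors

# Handcrafted features: a simple per-channel colour histogram chosen by the "expert".
def handcrafted_features(images, bins=16):
    feats = []
    for img in images:
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
                for c in range(3)]
        feats.append(np.concatenate(hist))
    return np.stack(feats)  # (N, 3 * bins) handcrafted feature vectors
```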