2023
DOI: 10.1038/s41598-023-30480-8

Consecutive multiscale feature learning-based image classification model

Abstract: Extracting useful features at multiple scales is a crucial task in computer vision. The emergence of deep-learning techniques and the advancements in convolutional neural networks (CNNs) have facilitated effective multiscale feature extraction that results in stable performance improvements in numerous real-life applications. However, currently available state-of-the-art methods primarily rely on a parallel multiscale feature extraction approach, and despite exhibiting competitive accuracy, the models lead to …
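The abstract's distinction between parallel and consecutive multiscale feature extraction can be illustrated with a minimal sketch. This is not the paper's actual CMSFL architecture: simple average pooling stands in for convolutional branches, the input is a 1-D feature vector rather than an image tensor, and all function names are illustrative.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool a 1-D feature vector with window size and stride k."""
    n = len(x) // k
    return x[: n * k].reshape(n, k).mean(axis=1)

def parallel_multiscale(x, scales=(1, 2, 4)):
    """Parallel scheme: every scale pools the ORIGINAL input independently;
    the per-scale feature vectors are then concatenated."""
    return np.concatenate([avg_pool(x, s) for s in scales])

def consecutive_multiscale(x, scales=(1, 2, 4)):
    """Consecutive scheme: each stage pools the PREVIOUS stage's output,
    so coarser-scale features are built on top of finer-scale ones."""
    feats, h = [], x
    for s in scales:
        h = avg_pool(h, s)
        feats.append(h)
    return np.concatenate(feats)

x = np.arange(8, dtype=float)
print(parallel_multiscale(x).shape)     # (14,): 8 + 4 + 2 features
print(consecutive_multiscale(x).shape)  # (13,): 8 + 4 + 1 features
```

The consecutive variant reuses each stage's output as the next stage's input, which is the structural difference the abstract contrasts with the parallel approach used by prior state-of-the-art methods.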

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1

Citation Types

0
3
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
8

Relationship

0
8

Authors

Journals

citations
Cited by 11 publications
(4 citation statements)
references
References 51 publications
0
3
0
Order By: Relevance
“…Table V compares the performance of our proposed bacteria classification models with other popular deep learning models including AlexNet [21], VGG [26], ResNet [22], DenseNet [27], SqueezeNet [28], vision transformer (ViT) [29], model soup [30] and Lion Fine-tune CNN [31]. Furthermore, we also compared the proposed method with two other recent SOTA methods, namely UzADL [32] and CMSFL [33], which used WIB-ReLU. As mentioned earlier, even though each model has its own base architecture, all of them were developed using the same parameter setting to maintain a fair basis for comparison.…”
Section: Results (mentioning)
confidence: 99%
“…Besides the aforementioned models, we also conduct a comparison of our proposed model with the most recent state-of-the-art (SOTA) image classification methods including vision transformer (ViT) [29], model soup [30], Lion Fine-tune CNN [31], UzADL [32] and CMSFL [33]. For such models, we regenerate the code and train using our bacteria dataset.…”
Section: Schedule Length (mentioning)
confidence: 99%
“…bution of our model in the clothing change. While ResNet-50 may serve as a backbone network for fair comparison, we also integrated a better backbone CMSFL [31] to further improve the performance of our framework. The knowledge embedding configuration used in our model is based on the pretraining module described in KST-GCN [32].…”
Section: Mean Average Precision (mAP) (mentioning)
confidence: 99%
“…However, the computational complexity of the proposed model is significantly higher compared to traditional lightweight models. This is due to the parallel processing and multiple feature extraction layers involved in the HiFuse model. [37] presented a novel image classification system called CMSFL-Net, which utilizes a consecutive multiscale feature-learning approach.…”
mentioning
confidence: 99%