2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)
DOI: 10.1109/ispacs51563.2021.9651024
Diabetic Retinopathy Detection using CNN, Transformer and MLP based Architectures

Cited by 16 publications (3 citation statements)
References 2 publications
“…Due to the excellent performance of the vanilla Transformer in various computer vision tasks, many recent works [12], [26]-[29] have explored the effectiveness of the Transformer architecture to address the limitations of CNNs in fundus image classification. In particular, N. S. Kumar et al. [30] evaluate Transformer-, CNN-, and Multi-Layer Perceptron (MLP)-based DR grading architectures in terms of model convergence time, accuracy, and model scale, demonstrating that the Transformer-based model outperforms the CNN and MLP architectures in accuracy while achieving comparable convergence time.…”
Section: B. Transformer-Based Methods
confidence: 99%
“…They obtained an Acc of 0.861, a Se of 0.854, and a Sp of 0.875. Kumar and Karthikeyan (2021) used different models such as EfficientNet and Swin-Transformer with 3,600 fundus images and obtained an Acc of 0.864 with Swin-Transformer. Lahmar and Idri (2022) presented automatic two-class classification using 28 hybrid architectures.…”
Section: Literature Review
confidence: 99%
“…Recently, Kumar et al. [19] tested the classification performance of several major CNNs and Transformers, as well as MLPs, on the APTOS dataset. They found that Transformers perform better than CNNs and MLPs overall.…”
Section: Introduction
confidence: 99%
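The citation statements above describe comparisons of CNN-, Transformer-, and MLP-based classifiers (e.g., EfficientNet and Swin-Transformer) for DR grading in terms of accuracy, model scale, and convergence time. The following is a minimal sketch, not taken from the cited paper, assuming the timm library and standard model identifiers (efficientnet_b0, swin_tiny_patch4_window7_224, mixer_b16_224) as illustrative stand-ins for the three architecture families; it only compares parameter count and per-image inference time on 224x224 fundus-sized inputs, and the actual models, dataset, and training setup in the paper may differ.

```python
# Hypothetical sketch: compare CNN / Transformer / MLP classifiers by
# parameter count and forward-pass latency. Model names are timm
# identifiers chosen for illustration; they are not from the cited paper.
import time

import timm
import torch

NUM_CLASSES = 5  # standard 5-grade DR labels (assumption)

candidates = {
    "CNN (EfficientNet-B0)": "efficientnet_b0",
    "Transformer (Swin-T)": "swin_tiny_patch4_window7_224",
    "MLP (MLP-Mixer B/16)": "mixer_b16_224",
}

dummy_batch = torch.randn(1, 3, 224, 224)  # one fundus-sized RGB image

for label, name in candidates.items():
    model = timm.create_model(name, pretrained=False, num_classes=NUM_CLASSES)
    model.eval()
    n_params = sum(p.numel() for p in model.parameters())

    with torch.no_grad():
        start = time.perf_counter()
        logits = model(dummy_batch)
        elapsed_ms = (time.perf_counter() - start) * 1000.0

    print(f"{label}: {n_params / 1e6:.1f}M params, "
          f"{elapsed_ms:.1f} ms/image, output shape {tuple(logits.shape)}")
```

Accuracy comparisons of the kind reported above would additionally require training on a labeled fundus dataset such as APTOS; this sketch only illustrates the model-scale and inference-time dimensions of the comparison.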