ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053551
Multi-Modal Self-Supervised Pre-Training for Joint Optic Disc and Cup Segmentation in Eye Fundus Images

Cited by 25 publications (24 citation statements). References 16 publications.
“…Álvaro S. Hervella et al. [24] propose a novel self-supervised pre-training method to improve the segmentation of the optic disc and cup, which also achieves good results. All of the above methods achieve good segmentation results, but through experimental comparison, our proposed method achieves a higher score than they do, especially on the OC segmentation task.…”
Section: Discussion
confidence: 96%
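The pre-training approach cited above learns from retinal images before any segmentation labels are used, and the pre-trained weights are then fine-tuned for joint OD/OC segmentation. A minimal PyTorch sketch of this two-stage idea follows; the multimodal reconstruction pretext task (predicting one fundus modality from another), the L1 loss, and the names `unet`, `paired_loader`, and `seg_loader` are illustrative assumptions, not the authors' exact setup:

```python
import torch
import torch.nn as nn

# Sketch: self-supervised pre-training via multimodal reconstruction,
# followed by fine-tuning on joint optic disc (OD) / optic cup (OC)
# segmentation. The network `unet` is assumed to be any encoder-decoder
# model (e.g., a U-Net) defined elsewhere.

def pretrain_multimodal(unet: nn.Module, paired_loader, epochs: int = 10):
    """Pretext task: reconstruct the target modality from the source one."""
    opt = torch.optim.Adam(unet.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # pixel-wise reconstruction loss (assumption)
    for _ in range(epochs):
        for source_img, target_img in paired_loader:
            loss = loss_fn(unet(source_img), target_img)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return unet

def finetune_segmentation(unet: nn.Module, seg_loader, epochs: int = 10):
    """Fine-tune the pre-trained network on OD/OC segmentation.

    In practice the final layer is replaced beforehand so the network
    outputs two channels, one per structure (OD and OC).
    """
    opt = torch.optim.Adam(unet.parameters(), lr=1e-5)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for image, mask in seg_loader:  # mask shape: (N, 2, H, W)
            loss = loss_fn(unet(image), mask)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return unet
```

The design rationale is that the pretext task requires no manual annotations, so the encoder can learn retinal structure from abundant unlabeled or paired data before the comparatively scarce segmentation labels are introduced.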
“…In order to verify that our method is better than other methods, we compare our proposed method with state-of-the-art approaches such as the pOSAL framework [19], GL-Net [20], M-Net [16], Stack-U-Net [21], WGAN [22], two-stage Mask R-CNN [23], the multi-modal self-supervised pre-training network [24], Shuang Yu [25], and A. Sevastopolsky [10]. Additionally, we compare with the fully convolutional network U-Net [15].…”
Section: Discussion
confidence: 99%
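Comparisons like the one quoted above are conventionally reported as per-structure overlap scores, most often the Dice coefficient computed separately for the OD and OC masks. A small self-contained sketch of that metric follows; the function name and the toy masks are illustrative assumptions:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = structure pixel)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: evaluate the OD and OC channels of a predicted mask
# separately, mirroring how the cited works report per-structure results.
pred_mask = np.zeros((2, 256, 256), dtype=np.uint8)  # channels: (OD, OC)
gt_mask = np.zeros((2, 256, 256), dtype=np.uint8)
pred_mask[0, 100:180, 100:180] = 1  # toy OD prediction
gt_mask[0, 95:175, 105:185] = 1     # toy OD ground truth
pred_mask[1, 120:150, 120:150] = 1  # toy OC prediction
gt_mask[1, 125:155, 125:155] = 1    # toy OC ground truth
print("OD Dice:", dice_score(pred_mask[0], gt_mask[0]))  # ~0.879
print("OC Dice:", dice_score(pred_mask[1], gt_mask[1]))  # ~0.694
```

OC scores are typically lower than OD scores across methods because the cup boundary is fainter and smaller, which is why the quoted statement singles out the OC task when claiming an improvement.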