Medical Imaging 2019: Image Processing
DOI: 10.1117/12.2511572

Stack-U-Net: refinement network for improved optic disc and cup image segmentation

Abstract: In this work, we propose a cascade network for image segmentation that uses U-Net networks as building blocks together with the idea of iterative refinement. The model was applied mainly to achieve higher recognition quality in the task of finding the borders of the optic disc and cup, which are relevant to the diagnosis of glaucoma. Compared to a single U-Net and the state-of-the-art methods for the investigated tasks, the presented method outperforms the others on multiple benchmarks without a need fo…
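The abstract describes the core idea: U-Net blocks chained so that each stage refines the previous stage's prediction. Below is a minimal PyTorch sketch of that cascading pattern, assuming each refinement stage receives the original image concatenated with the previous stage's mask; the `MiniUNet` block, channel sizes, and the number of stages are illustrative placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """A deliberately tiny stand-in for a full U-Net block
    (one encoder/decoder level with a skip connection)."""
    def __init__(self, in_ch: int, out_ch: int, base: int = 16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, out_ch, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)
        bottom = self.down(skip)
        up = self.up(bottom)
        return self.dec(torch.cat([up, skip], dim=1))  # U-Net skip connection

class StackUNet(nn.Module):
    """Cascade of U-Net blocks: every stage after the first sees the
    input image concatenated with the previous stage's mask and
    produces a refined mask (illustrative wiring, not the paper's exact one)."""
    def __init__(self, n_stages: int = 3, img_ch: int = 3, mask_ch: int = 1):
        super().__init__()
        self.first = MiniUNet(img_ch, mask_ch)
        self.refiners = nn.ModuleList(
            MiniUNet(img_ch + mask_ch, mask_ch) for _ in range(n_stages - 1))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        mask = self.first(image)
        for refiner in self.refiners:
            mask = refiner(torch.cat([image, mask], dim=1))
        return torch.sigmoid(mask)

model = StackUNet(n_stages=3)
out = model(torch.randn(1, 3, 64, 64))   # toy fundus-image-sized input
print(out.shape)                          # torch.Size([1, 1, 64, 64])
```

Only the final mask is returned in this sketch; supervising the intermediate stages as well is a common choice for such cascades, but it is not asserted here as the paper's training setup.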

Cited by 30 publications (33 citation statements)
References 24 publications
“…From Table 5, we achieve Dice coefficients 2.8% and 9.23% higher than the original U-net for the optic disc and the optic cup, respectively. Compared with other improved methods based on the U-net structure [16], [21], [10], [25], our method surpasses the best of them, proposed by Shuang Yu [25], by 0.42% on OD and by 2.46% on the more difficult OC segmentation task. On the REFUGE dataset, our method also outperforms M-net [16], which uses this dataset.…”
Section: Compared With U-shaped Network
Citation type: mentioning
confidence: 84%
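The quoted comparison is expressed in Dice coefficients, the standard overlap measure Dice(A, B) = 2|A ∩ B| / (|A| + |B|). A minimal PyTorch implementation for binary masks might look as follows; the 0.5 threshold and the `eps` smoothing term are common conventions, not details taken from the cited papers.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-7) -> torch.Tensor:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = (pred > 0.5).float()       # threshold soft predictions to a binary mask
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: a prediction identical to the target gives Dice ≈ 1.0
mask = torch.randint(0, 2, (1, 1, 64, 64))
print(dice_coefficient(mask.float(), mask))
```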
“…To verify that our method is better than other methods, we compare it with state-of-the-art approaches such as the pOSAL framework [19], GL-Net [20], M-Net [16], Stack-U-Net [21], WGAN [22], the two-stage Mask R-CNN [23], a multi-modal self-supervised pretraining network [24], Shuang Yu [25], and A. Sevastopolsky [10]. Additionally, we compare with the fully convolutional network U-Net [15].…”
Section: Discussion
Citation type: mentioning
confidence: 99%