2019
DOI: 10.1007/978-3-030-32239-7_10
Dual Encoding U-Net for Retinal Vessel Segmentation

Cited by 147 publications (68 citation statements) · References 10 publications
“…Table 4 compares the performance of different methods for retinal vessel segmentation on the DRIVE dataset. Compared with references [21, 40, 42–44], the sensitivity of the method in this paper is lower, since some vessel pixels are still classified as background, but the specificity, accuracy, and F-measure are all the best. Compared with the method in the study by Samuel and Veeramalai [21], the specificity of this paper's method was improved by 0.84%.…”
Section: Methods
Mentioning confidence: 85%
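
The comparison above is stated in terms of the standard pixel-wise scores for vessel segmentation. As a reference for how those scores relate, the following is a minimal NumPy sketch of sensitivity, specificity, accuracy, and F-measure computed from a binarized prediction and a ground-truth mask; the function name and implementation are illustrative assumptions, not code from any of the cited papers.

import numpy as np

def vessel_metrics(pred, gt):
    """Pixel-wise metrics for binary vessel maps.

    pred, gt: arrays of 0/1 values with the same shape.
    Returns sensitivity, specificity, accuracy, and F-measure (F1).
    """
    pred = pred.astype(bool).ravel()
    gt = gt.astype(bool).ravel()

    tp = np.sum(pred & gt)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background pixels correctly rejected
    fp = np.sum(pred & ~gt)     # background wrongly marked as vessel
    fn = np.sum(~pred & gt)     # vessel pixels missed

    sensitivity = tp / (tp + fn)                 # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f_measure

A low sensitivity with high specificity, as reported above, corresponds to a conservative segmentation that misses some thin vessels but produces few false vessel pixels.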
“…The segmentation results of the BFCN method are compared with those produced by the methods of Zhuang [43] and Wang et al. [44]. The methods of Zhuang [43] and Wang et al. [44] generate many artifacts, which can seriously interfere with clinical diagnosis, whereas the method in this paper generates fewer artifacts. In summary, the method in this paper can effectively and accurately segment retinal blood vessel images.…”
Section: Methods
Mentioning confidence: 99%
“…Yan et al. [38] train the U-Net with a joint loss combining a pixel-wise loss and a segment-level loss. DEU-Net [39] introduces a feature fusion module that combines a spatial path with a large kernel, which preserves spatial information, and a context path with a multi-scale convolution block, which captures more semantic information. DeepVessel [40] applies a multi-scale, multi-level Convolutional Neural Network (CNN) with a side-output layer to learn a rich hierarchical representation, and models long-range interactions between pixels with a Conditional Random Field.…”
Section: B. Retinal Vessel Segmentation
Mentioning confidence: 99%
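
The statement above summarizes the dual-path idea behind DEU-Net: a spatial path with a large kernel plus a context path with multi-scale convolutions, fused into a single feature map. The following is a rough PyTorch sketch of that idea only; the module name, layer choices, and channel sizes are assumptions for illustration and do not reproduce the authors' DEU-Net implementation.

import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Illustrative spatial/context dual-path block with feature fusion."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Spatial path: a single large-kernel convolution to preserve detail.
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Context path: parallel multi-scale convolutions (1x1, 3x3, dilated 3x3).
        self.ctx1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.ctx3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.ctx3d = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
        self.ctx_merge = nn.Sequential(
            nn.Conv2d(3 * out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Feature fusion: concatenate both paths and project back to out_ch.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        spatial = self.spatial(x)
        context = self.ctx_merge(
            torch.cat([self.ctx1(x), self.ctx3(x), self.ctx3d(x)], dim=1)
        )
        return self.fuse(torch.cat([spatial, context], dim=1))

# Example: fuse 64-channel encoder features from a 96x96 patch.
# block = DualPathBlock(64, 64)
# y = block(torch.randn(1, 64, 96, 96))   # -> (1, 64, 96, 96)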
“…We compare our network with several state-of-the-art models, including VesselNet [55], U-Net [5], DU-Net [39], LadderNet [53], Bo Liu et al. [54], and CE-Net [37].…”
Section: Experiments Performance
Mentioning confidence: 99%