2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)
DOI: 10.1109/isbi.2017.7950512

A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation

Abstract: Automatic segmentation of retinal blood vessels from fundus images plays an important role in the computer aided diagnosis of retinal diseases. The task of blood vessel segmentation is challenging due to the extreme variations in the morphology of the vessels against a noisy background. In this paper, we formulate the segmentation task as a multi-label inference task and utilize the implicit advantages of the combination of convolutional neural networks and structured prediction. Our proposed convolutional neural ne…
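The abstract describes casting vessel segmentation as a multi-label inference problem, in which a convolutional network predicts the labels of all pixels in a patch jointly rather than classifying one centre pixel at a time. The following is a minimal patch-in/patch-out sketch of that idea; the patch size, layer widths, and loss are illustrative placeholders, not the architecture reported in the paper.

```python
# Illustrative sketch only: a small patch-in / patch-out convolutional network,
# assuming 28x28 single-channel fundus patches and per-pixel vessel labels.
# Layer sizes and patch size are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class PatchVesselNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution keeps the network fully convolutional: every pixel
        # of the patch gets its own vessel/background logit, so one forward
        # pass jointly predicts the whole patch (the "structured" output).
        self.classifier = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))  # logits, shape (N, 1, H, W)

if __name__ == "__main__":
    net = PatchVesselNet()
    patches = torch.randn(8, 1, 28, 28)                   # batch of grayscale patches
    labels = torch.randint(0, 2, (8, 1, 28, 28)).float()  # synthetic vessel masks
    loss = nn.BCEWithLogitsLoss()(net(patches), labels)   # per-pixel cross-entropy
    loss.backward()
    print(loss.item())
```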

Cited by 216 publications (151 citation statements)
References 17 publications
“…Further developments are presented in [41], [42], where Fully Convolutional Networks (FCNs) are used to segment retinal vessels in color fundus photography images. The fully connected layers are replaced by deconvolutional layers, which allows faster and more precise vessel localization than approaches based on fully connected layer classification.…”
Section: B. Supervised
Citation type: mentioning (confidence: 99%)
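As a rough illustration of the idea quoted above, replacing fully connected classification layers with deconvolutional (transposed-convolution) layers, the sketch below pairs a small downsampling encoder with a transposed-convolution head that restores full resolution. The channel counts and depths are assumptions, not the networks of [41] or [42].

```python
# Minimal sketch: an encoder that downsamples and a transposed-convolution
# ("deconvolution") head that restores full resolution for dense per-pixel
# prediction, instead of flattening into fully connected layers.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                     # H/2 x W/2
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                     # H/4 x W/4
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # back to H/2 x W/2
    nn.ConvTranspose2d(16, 1, 2, stride=2),              # back to H x W logits
)

x = torch.randn(1, 3, 64, 64)            # RGB fundus crop (synthetic)
vessel_logits = decoder(encoder(x))      # dense prediction, shape (1, 1, 64, 64)
print(vessel_logits.shape)
```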
“…(Sec. V-A)
Oliveira et al. [22], 2011, Liver, CT
Goceri et al. [23], 2017, Liver, MRI
Bruyninckx et al. [24], 2010, Liver, CT
Bruyninckx et al. [25], 2009, Lung, CT
Asad et al. [26], 2017, Retina, CFP
Mapayi et al. [27], 2015, Retina, CFP
Sreejini et al. [28], 2015, Retina, CFP
Cinsdikici et al. [29], 2009, Retina, CFP
Al-Rawi et al. [30], 2007, Retina, CFP
Hanaoka et al. [31], 2015, Brain, MRA

Supervised machine learning (Sec. V-B)
Sironi et al. [32], 2014, Brain, Microscopy
Merkow et al. [33], 2016, Cardiovascular and Lung, CT and MRI
Sankaran et al. [34], 2016, Coronary, CTA
Schaap et al. [35], 2011, Coronary, CTA
Zheng et al. [36], 2011, Coronary, CT
Nekovei et al. [37], 1995, Coronary, CT
Smistad et al. [38], 2016, Femoral region and Carotid, US
Chu et al. [39], 2016, Liver, X-ray fluoroscopy
Orlando et al. [40], 2017, Retina, CFP
Dasgupta et al. [41], 2017, Retina, CFP
Mo et al. [42], 2017, Retina, CFP
Lahiri et al. [43], 2017, Retina, CFP
Annunziata et al. [44], 2016, Retina, Microscopy
Fu et al. [45], 2016, Retina, CFP
Luo et al. [46], 2016, Retina, CFP
Liskowski et al. [47], 2016, Retina, CFP
Li et al. [48], 2016, Retina, CFP
Javidi et al. [49], 2016, Retina, CFP
Maninis et al. [50], 2016, Retina, CFP
Prentasvic et al. [51], 2016, Retina, CT
Wu et al. [52], 2016, Retina, CFP
Annunziata et al. [53], 2015, Retina, Microscopy
Annunziata et al. [54], 2015, Retina, Microscopy
Vega et al. [55], 2015, Retina, CFP
Wang et al. [56], 2015, Retina, CFP
Fraz et al. [57], 2014, Retina, CFP
Ganin et al. [58], 2014, Retina, CFP…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…The proposed method was validated on the DRIVE and STARE datasets for the retinal vessel segmentation task, where the area under the recall-precision curve reached 0.822 for DRIVE and 0.831 for STARE. In the context of CNN-based approaches, remarkable performance has been achieved by Liskowski et al. [89], with a maximum area under the curve (AUC) of 0.99 and an accuracy of 95.33%, while an AUC of 0.974 was achieved by Dasgupta and Singh [91] for automated retinal vessel segmentation.…”
Section: Machine Learning Techniques
Citation type: mentioning (confidence: 99%)
“…
Method, Acc, AUC, Sens, Spec
Melinščak [4]: 0.9466, 0.9749, -, -
Fu [5]: 0.9523, -, 0.7603, -
Li [7]: 0.9527, 0.9738, 0.7569, 0.9816
Dasgupta [8]: 0.9533, 0.9744, 0.7691, 0.9801
Yan [9]: 0.9542, 0.9752, 0.7653, 0.9818
Liskowski [10]: 0.9251, 0.9738, 0.9160, 0.9241
CapsNet [11]: 0.9292, 0.9638, 0.7614, 0.9731
Proposed: 0.9547, 0.9750, 0.7651, 0.9818

… on the training images and combined the test predictions of each fold in an ensemble. We evaluated our model using the following metrics: Accuracy (Acc), Area under the curve (AUC), Sensitivity (Sens), and Specificity (Spec).…”
Section: Methods
Citation type: mentioning (confidence: 99%)
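For reference, the four metrics in the table above (Acc, AUC, Sens, Spec) can be computed from flattened per-pixel predictions as in the sketch below; the arrays are synthetic placeholders, not results from any of the cited methods.

```python
# Hedged sketch of the evaluation metrics listed in the table above, computed
# with scikit-learn from flattened per-pixel predictions on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=10_000)                          # ground-truth vessel mask, flattened
probs = np.clip(truth * 0.7 + rng.random(10_000) * 0.5, 0, 1)    # synthetic vessel probabilities
pred = (probs >= 0.5).astype(int)                                # hard segmentation at threshold 0.5

tp = np.sum((pred == 1) & (truth == 1))
tn = np.sum((pred == 0) & (truth == 0))
fp = np.sum((pred == 1) & (truth == 0))
fn = np.sum((pred == 0) & (truth == 1))

acc = (tp + tn) / (tp + tn + fp + fn)     # Accuracy
sens = tp / (tp + fn)                     # Sensitivity (recall on vessel pixels)
spec = tn / (tn + fp)                     # Specificity (recall on background pixels)
auc = roc_auc_score(truth, probs)         # Area under the ROC curve
print(f"Acc={acc:.4f}  AUC={auc:.4f}  Sens={sens:.4f}  Spec={spec:.4f}")
```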
“…Tetteh et al. [6] used the Inception architecture without pooling layers to classify each pixel in an image patch and extracted vessel centerlines. CNN autoencoder networks also classify each pixel of an image patch (e.g., [7], [8], [9]). The recent approach by Yan et al. [9] takes into account the thickness of vessels by jointly adapting segment-level and pixel-wise loss functions.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
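To make the distinction concrete, the sketch below combines a standard pixel-wise binary cross-entropy with a simple segment-level term in which every connected vessel component contributes equally, so thin vessels are not drowned out by thick ones. It is only a generic illustration of mixing the two kinds of loss, not the formulation actually used by Yan et al. [9]; the function name and the mixing weight `alpha` are made up for this example.

```python
# Generic illustration of a joint pixel-wise + segment-level loss for a single
# image (batch size 1, one channel). NOT the exact loss of Yan et al. [9].
import torch
import torch.nn.functional as F
from scipy.ndimage import label

def joint_loss(logits, target, alpha=0.5):
    # Standard per-pixel binary cross-entropy over the whole image.
    pixel_loss = F.binary_cross_entropy_with_logits(logits, target)

    # Segment-level term: average the per-pixel loss within each connected
    # vessel component, then average over components, so every segment counts
    # equally regardless of its size.
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    components, n = label(target.squeeze().cpu().numpy() > 0.5)
    if n == 0:
        return pixel_loss
    comp = torch.from_numpy(components).to(logits.device)
    seg_losses = [per_pixel.squeeze()[comp == i].mean() for i in range(1, n + 1)]
    segment_loss = torch.stack(seg_losses).mean()

    return alpha * pixel_loss + (1 - alpha) * segment_loss

if __name__ == "__main__":
    logits = torch.randn(1, 1, 64, 64, requires_grad=True)
    target = (torch.rand(1, 1, 64, 64) > 0.9).float()   # sparse synthetic "vessels"
    joint_loss(logits, target).backward()
```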