2018
DOI: 10.1007/978-3-030-00919-9_20

Retinal Blood Vessel Segmentation Using a Fully Convolutional Network – Transfer Learning from Patch- to Image-Level

Abstract: Fully convolutional networks (FCNs) are well known to provide state-of-the-art results in various medical image segmentation tasks. However, these models usually need a tremendous number of training samples to achieve good performance. Unfortunately, this requirement is often difficult to satisfy in the medical imaging field, due to the scarcity of labeled images. As a consequence, the common tricks for FCN training range from data augmentation and transfer learning to patch-based segmentation. In the latter, …
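The patch-based training strategy the abstract alludes to can be sketched as follows. This is an illustrative NumPy helper, not code from the paper; `extract_patches`, the patch size, and the stride are all assumptions:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Crop overlapping square patches from a 2D image, the usual way
    patch-based segmentation multiplies a small set of labeled images
    into many training samples."""
    h, w = image.shape[:2]
    patches = [image[y:y + patch_size, x:x + patch_size]
               for y in range(0, h - patch_size + 1, stride)
               for x in range(0, w - patch_size + 1, stride)]
    return np.stack(patches)

# A dummy 64x64 "retinal" image cropped into 32x32 patches with stride 16
image = np.random.rand(64, 64)
patches = extract_patches(image, patch_size=32, stride=16)
print(patches.shape)  # (9, 32, 32): a 3x3 grid of overlapping patches
```

With a stride smaller than the patch size, neighbouring patches overlap, which is one way a handful of labeled fundus images can yield thousands of training samples.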

Cited by 8 publications (6 citation statements)
References 12 publications
“…The tested U‐Net‐based CNNs were more efficient not only in terms of the segmentation metrics but also in training time. Accordingly, it has been reported that patch‐based approaches were less computationally effective than fully convolutional networks with an image‐based learning approach, 22,52 as they classify each pixel in an image separately. It is important to mention that, although the training time for the U‐Net optimized in the nnU‐Net framework was longer than for the common U‐Nets, the overall time the authors spent on manual adaptation of the networks and on the grid search was much longer.…”
Section: Discussion
confidence: 99%
“…The noise was added as a TensorFlow layer directly to the CNN input. In addition to the training hyperparameters, and as previously described, 22,42 the number of layers (i.e., the depth of the CNN) and the order and presence of noise, dropout, and batch normalization layers were considered adjustable architecture hyperparameters. Two variants of the U‐Nets were used: one with the original depth, that is, four max‐poolings, 30 and one with a reduced depth (Truncated U‐Net), that is, with three max‐poolings.…”
Section: Methods
confidence: 99%
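The input-noise augmentation described in that statement can be sketched in plain NumPy. The cited work adds the noise as a TensorFlow layer (comparable to Keras' `GaussianNoise`); the function below and its names are illustrative only:

```python
import numpy as np

def gaussian_noise_layer(x, stddev, training=True, rng=None):
    """Sketch of an input-noise layer: add zero-mean Gaussian noise
    during training, act as the identity at inference time."""
    if not training:
        return x
    rng = np.random.default_rng(0) if rng is None else rng
    return x + rng.normal(0.0, stddev, size=x.shape)

batch = np.zeros((2, 8, 8, 1))                               # dummy image batch
noisy = gaussian_noise_layer(batch, stddev=0.1)              # perturbed input
clean = gaussian_noise_layer(batch, stddev=0.1, training=False)  # unchanged
print(noisy.std() > 0, np.array_equal(clean, batch))
```

Applying the noise only during training acts as a regularizer without corrupting predictions at inference, which is why the layer's position relative to the input is treated as an architecture hyperparameter.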
“…To handle such large environments without making any changes to the DNN, we use the standard approach of sliding windows with stride equal to window-width [3,13,27,34]. This crops the larger environment into pieces of the size of the training environments.…”
Section: -DOF Hinged Robot (H)
confidence: 99%
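The sliding-window scheme quoted above, with stride equal to window width, amounts to non-overlapping tiling. A minimal sketch, with hypothetical helper names and sizes:

```python
import numpy as np

def tile_image(image, window):
    """Sliding window with stride equal to the window width: crop a
    larger input into non-overlapping pieces of the training size."""
    h, w = image.shape
    return np.stack([image[y:y + window, x:x + window]
                     for y in range(0, h - window + 1, window)
                     for x in range(0, w - window + 1, window)])

def untile(tiles, h, w, window):
    """Stitch per-tile outputs back into the full-size result."""
    out = np.empty((h, w), dtype=tiles.dtype)
    idx = 0
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            out[y:y + window, x:x + window] = tiles[idx]
            idx += 1
    return out

large = np.arange(96 * 96).reshape(96, 96)  # input larger than the training size
tiles = tile_image(large, window=32)
print(tiles.shape)  # (9, 32, 32)
```

Because the tiles do not overlap, running the network on each tile and stitching the outputs back together covers the full input exactly once, with no change to the DNN itself.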
“…Since in the area of biomedical image processing it is common to work with a limited number of examples, different training strategies are required. Frequently, this issue is overcome by applying transfer learning [ 18 , 25 , 29 ] or by using patches rather than full images for training [ 9 , 11 , 19 , 23 ], while a few papers have proposed hybrid techniques for feature extraction [ 12 , 34 ]. In this paper, a new method that incorporates distorted Gaussian Matched Filters with adaptive parameters as part of a Deep Convolutional Architecture is proposed.…”
Section: Introduction
confidence: 99%