2020
DOI: 10.1016/j.bbe.2020.07.007
A deep Residual U-Net convolutional neural network for automated lung segmentation in computed tomography images

Cited by 98 publications (64 citation statements) · References 16 publications
“…In [41], an algorithm based on random forest, a deep convolutional network, and multi-scale super-pixels was proposed for segmenting lungs with interstitial lung disease (ILD) using the ILDs database [42], achieving an average DSC of 96.45%. Khanna et al. [43] implemented a residual U-Net with a false-positive removal algorithm using a training set of 173 images from three publicly available benchmark datasets, namely LUNA, VESSEL12, and HUG-ILD. Specifically, they implemented a U-Net with residual blocks to overcome the problem of performance degradation, together with various data augmentation techniques to improve the generalization capability of the method, obtaining a DSC > 98.63% under five-fold cross-validation.…”
Section: Discussion
confidence: 99%
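The residual blocks mentioned in the excerpt route the block input around the learned layers via a skip connection, which is what mitigates the performance degradation of deeper networks. A minimal numpy sketch of the idea, using fully connected layers as an illustrative stand-in for the paper's convolutional layers (not the authors' implementation):

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit.
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Minimal residual block: two linear maps with a skip connection,
    # y = relu(x + W2 · relu(W1 · x)). The identity path lets gradients
    # flow even when the learned transform contributes little.
    out = relu(x @ w1)
    out = out @ w2
    return relu(x + out)
```

With all-zero weights the block reduces to `relu(x)`, i.e. the skip connection alone carries the signal, which is the degradation-avoidance property the excerpt refers to.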
“…Finally, the best model was trained and validated five times to ensure that its performance was not due to random factors inherent in the training process, such as the initialization of the weights. Using this procedure, the best model configuration identified used a batch size of 8, an equally weighted combined loss function of soft-Dice and categorical cross-entropy, batch normalization, and ReLU activations in the hidden layers [25, 26] (Fig. 3).…”
Section: Methods
confidence: 99%
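The equally weighted combined loss described in this excerpt can be sketched in numpy. The smoothing constant `eps` and the exact Dice formulation here are illustrative assumptions, not the cited paper's precise definition:

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    # Soft Dice loss over per-pixel class probabilities:
    # 1 - (2 * |A ∩ B|) / (|A| + |B|), with eps for numerical stability.
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

def categorical_cross_entropy(y_true, y_pred, eps=1e-7):
    # Mean cross-entropy between one-hot targets and predicted probabilities.
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))

def combined_loss(y_true, y_pred, w_dice=0.5, w_ce=0.5):
    # Equally weighted combination of the two terms, as in the excerpt.
    return (w_dice * soft_dice_loss(y_true, y_pred)
            + w_ce * categorical_cross_entropy(y_true, y_pred))
```

A perfect prediction drives both terms to roughly zero, while a uniform (uninformative) prediction incurs both a Dice penalty and a cross-entropy penalty, which is why the combination is popular for class-imbalanced segmentation.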
“…Residual connections were originally introduced with the ResNet architecture [51] and have frequently been incorporated into CNN architectures, as ResNet models converge better and thus enable deeper networks to perform considerably better than their shallow counterparts [31,51]. U-Net architectures with residual connections have been designed, e.g., for road extraction from aerial remote sensing imagery [42], for urban land cover classification in aerial and satellite imagery [52,53], for sea-land segmentation [54,55], for tree species classification in airborne imagery [56], for semantic segmentation of ships in optical remote sensing imagery [57], and also for biomedical image segmentation [58] (see also [31]). The subsequent max pooling operation takes the maximum value over a 2 × 2 window in order to halve the image size before entering the next downsampling block.…”
Section: Post-processing
confidence: 99%
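The 2 × 2 max pooling step described in the excerpt can be sketched directly in numpy (a single-channel image with even spatial dimensions is assumed for brevity):

```python
import numpy as np

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2: splits the image into non-overlapping
    # 2x2 windows and keeps each window's maximum, halving both dimensions.
    h, w = x.shape
    assert h % 2 == 0 and w % 2 == 0, "spatial dims must be even"
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

For example, a 4 × 4 input produces a 2 × 2 output whose entries are the maxima of the four quadrant windows, which is exactly the halving behavior the excerpt describes between downsampling blocks.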