2022
DOI: 10.1038/s41598-022-18646-2
A comparison of deep learning U-Net architectures for posterior segment OCT retinal layer segmentation

Abstract: Deep learning methods have enabled a fast, accurate and automated approach for retinal layer segmentation in posterior segment OCT images. Due to the success of semantic segmentation methods adopting the U-Net, a wide range of variants and improvements have been developed and applied to OCT segmentation. Unfortunately, the relative performance of these methods is difficult to ascertain for OCT retinal layer segmentation due to a lack of comprehensive comparative studies, and a lack of proper matching between n…

Cited by 29 publications
(17 citation statements)
References 57 publications
“…Despite the basic nature of our method, it demonstrated high performance and satisfactory segmentations. While more advanced variants of the U-Net could be further explored, previous research has indicated that they may not necessarily result in significant performance improvements and meanwhile increase complexity and require a higher demand on computational resources.43…”
Section: Discussion
confidence: 99%
“…While more advanced variants of the U-Net could be further explored, previous research has indicated that they may not necessarily result in significant performance improvements and meanwhile increase complexity and require a higher demand on computational resources.43 The ON quantification method developed in this study was validated based on manual reference measurements. However, a direct comparison with manual measurements is not straightforward, as the offset between the model's origin and the manual reference varies for each nerve, and manual measurements are subjective and sensitive to errors.…”
Section: Limitations and Future Work
confidence: 99%
“…A segmentation pipeline using a deep learning approach was applied to segment the two retinal layer boundaries delineating the GCIPL, the retinal nerve fibre layer–ganglion cell layer (RNFL–GCL) and inner plexiform layer–inner nuclear layer (IPL–INL) boundaries (Figure 1). In brief, this consisted of cropped OCT images, used to improve generalisability to unseen OCT data, input into a typical encoder–decoder neural network (U-Net).37,38 The output of this process was the per-pixel classification of OCT B-scan images into one of three classes, representing the regions separated by the RNFL–GCL and IPL–INL boundaries, which was subsequently used to infer boundary locations via graph search.…”
Section: Methods
confidence: 99%
“…In brief, this consisted of cropped OCT images, used to improve generalisability to unseen OCT data, input into a typical encoder-decoder neural network (U-Net).37,38 The output of this process was the per-pixel classification of OCT B-scan images into one of three classes, representing the regions separated by the RNFL-GCL and IPL-INL boundaries, which was subsequently used to infer boundary locations via graph search. This process was first applied with the raw widefield OCT image to produce initial predicted layer boundary positions, which were subsequently refined by repeating the process using a flattened OCT image, to normalise the appearance of the retinal tissue structure in the image provided to the network.…”
Section: Widefield OCT Segmentation and Processing for GCIPL Measurem...
confidence: 99%