2020
DOI: 10.1142/s1793545821400022

Deep learning algorithms to segment and quantify the choroidal thickness and vasculature in swept-source optical coherence tomography images

Abstract: Accurate segmentation of choroidal thickness (CT) and vasculature is important for better analyzing and understanding choroid-related ocular diseases. In this paper, we proposed and implemented a novel and practical method based on a deep learning algorithm, the residual U-Net, to segment and quantify the CT and vasculature automatically. With limited training and validation data, the residual U-Net was capable of identifying the choroidal boundaries as precisely as manual segmentation compared with an exp…
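
For orientation, the core building block of a residual U-Net can be sketched as below. This is a minimal PyTorch illustration with assumed channel counts and layer choices; it is not the authors' published architecture or training configuration.

```python
# Minimal sketch of a residual U-Net building block (PyTorch).
# Channel counts, normalization, and depth are assumptions, not the
# authors' published configuration.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as in ResNet-style U-Nets."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x) + self.skip(x))

# Example: one encoder step on a grayscale OCT B-scan patch
block = ResidualBlock(1, 64)
bscan = torch.randn(1, 1, 256, 256)   # (batch, channel, height, width)
features = block(bscan)               # -> (1, 64, 256, 256)
```

In a full residual U-Net, blocks like this sit on both the downsampling and upsampling paths, with skip connections carrying encoder features across to the decoder.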


Cited by 27 publications (18 citation statements). References 34 publications.
“…Instead of manually outlining the choroidal layers, we used custom-developed Python-based software and developed a ResNet-UNet to automatically identify the layers and measured the LV, TCV, and CVI within 6 mm of the center fovea. Our automatic method was validated using a manual method in our previous study, 15 and is more efficient and versatile for routine clinical practice and research. The study further reports the impact of penetration, analysis image method, and scanning speed in the acquisition of choroidal vascularity parameters in healthy and young cohort, suggesting the consideration of these factors in the research and clinical work.…”
Section: Discussion (mentioning)
confidence: 99%
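
As a rough illustration of how the LV, TCV, and CVI measurements named above relate, the sketch below computes them from binary choroid and lumen masks. The function name, mask layout, voxel scaling, and toy example are assumptions for illustration, not the cited study's actual Python pipeline.

```python
# Hedged sketch: choroidal vascularity index (CVI) from binary masks.
# Mask names and shapes are illustrative assumptions.
import numpy as np

def vascularity_index(choroid_mask: np.ndarray, lumen_mask: np.ndarray,
                      voxel_volume_mm3: float = 1.0):
    """Return (luminal volume LV, total choroidal volume TCV, CVI).

    choroid_mask : boolean array marking the whole choroid
    lumen_mask   : boolean array marking luminal (vessel) pixels/voxels
    """
    lumen_in_choroid = np.logical_and(choroid_mask, lumen_mask)
    total_volume = choroid_mask.sum() * voxel_volume_mm3          # TCV
    luminal_volume = lumen_in_choroid.sum() * voxel_volume_mm3    # LV
    cvi = luminal_volume / total_volume if total_volume > 0 else float("nan")
    return luminal_volume, total_volume, cvi

# Toy example on a single mask pair
choroid = np.zeros((4, 4), dtype=bool); choroid[1:3, :] = True
lumen = np.zeros((4, 4), dtype=bool); lumen[1, :2] = True
print(vascularity_index(choroid, lumen))  # LV=2, TCV=8, CVI=0.25
```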
“…10,11 Previous studies have analyzed CVI using different image processing methods and various OCT instruments. 10,12–15 It can be assumed that more OCT instruments with higher penetration and more automatic segmentation will be available for future research and routine clinical use. One may assume that higher penetration with less manual intervention on the image analysis may lead to better information for choroid vascularity measurements.…”
(mentioning)
confidence: 99%
“…Since Agrawal et al. first proposed the Niblack image binarization technique for EDI-OCT images, the choroid can be differentiated into luminal and stromal components ( 10 , 14 ). Through the acquisition of more detailed information on the choroidal structures using the deep-learning-based automatic segmentation and modified image binarization technique, our study demonstrated the effect of acute hyperglycemia on the choroidal components and CVI while modulating light adaptation in healthy participants.…”
Section: Discussion (mentioning)
confidence: 99%
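
The Niblack binarization mentioned above can be sketched with scikit-image's threshold_niblack. The window size, k value, and the convention of treating dark pixels as luminal are assumptions for illustration, not the settings used in the cited studies.

```python
# Hedged sketch of Niblack local thresholding on an OCT B-scan using
# scikit-image. Parameters and the dark-pixels-as-lumen convention are
# illustrative assumptions.
import numpy as np
from skimage.filters import threshold_niblack

def binarize_choroid(bscan: np.ndarray, window_size: int = 25, k: float = 0.0):
    """Split a grayscale B-scan into luminal (dark) and stromal (bright) pixels."""
    thresh = threshold_niblack(bscan, window_size=window_size, k=k)
    stromal = bscan > thresh   # brighter than the local threshold
    luminal = ~stromal         # darker pixels treated as vessel lumina
    return luminal, stromal

# Example on synthetic data standing in for a B-scan
bscan = np.random.rand(128, 128).astype(np.float32)
luminal, stromal = binarize_choroid(bscan)
```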
“…A custom algorithm based on image binarization and AI segmentation of SS-OCT ( 14 ) was used to obtain the choroidal structural parameters and vascularity index, which has been described in detail in our previous studies ( 9 , 21 ). Briefly, the upper and lower boundaries of the choroid were automatically detected using an algorithm based on the deep learning network implemented in MATLAB 2017a (Mathworks, Inc., Natick, MA, USA).…”
Section: Methods (mentioning)
confidence: 99%
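
The boundary-detection step described above can be illustrated by taking the first and last choroid-mask pixel in each A-scan column of a segmentation output. The sketch below is a NumPy illustration under that assumption; the cited pipeline itself is a MATLAB implementation built on a deep learning network.

```python
# Hedged sketch: upper/lower choroidal boundaries from a per-pixel mask.
# The mask layout (rows = depth, cols = A-scans) is an assumption.
import numpy as np

def choroid_boundaries(mask: np.ndarray):
    """Return (upper, lower) row indices per column, -1 where no choroid is found."""
    rows, cols = mask.shape
    upper = np.full(cols, -1, dtype=int)
    lower = np.full(cols, -1, dtype=int)
    for c in range(cols):
        idx = np.flatnonzero(mask[:, c])
        if idx.size:
            upper[c], lower[c] = idx[0], idx[-1]
    return upper, lower

# Thickness per A-scan (in pixels) follows directly from the two curves
mask = np.zeros((8, 5), dtype=bool); mask[2:6, :] = True
upper, lower = choroid_boundaries(mask)
thickness = np.where(upper >= 0, lower - upper + 1, 0)  # [4 4 4 4 4]
```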