2020
DOI: 10.1136/bjophthalmol-2020-315817
Automated diagnoses of age-related macular degeneration and polypoidal choroidal vasculopathy using bi-modal deep convolutional neural networks

Abstract: Aims: To investigate the efficacy of a bi-modality deep convolutional neural network (DCNN) framework to categorise age-related macular degeneration (AMD) and polypoidal choroidal vasculopathy (PCV) from colour fundus images and optical coherence tomography (OCT) images. Methods: A retrospective cross-sectional study was conducted of patients with AMD or PCV who presented to Peking Union Medical College Hospital. Diagnoses of all patients were confirmed by two retinal experts based on the diagnostic gold standard for AMD and…

Cited by 34 publications (33 citation statements) | References 32 publications
“…To validate our approach, we selected the following three baseline methods to compare with our method: (1) training two CNN models that classify the OCT and CFP modalities, respectively, into three categories (RPN, RPP and uninterpretable), with the final diagnoses then determined following the LCM illustrated in Appendix A and Supplementary Table S1; (2) training two classifiers to assess the interpretability of the OCT and CFP modalities separately, followed by two CNN models that identify the presence of retinal pathology in interpretable OCT and CFP images, respectively, with the final diagnoses determined using the LCM; and (3) a two-stream CNN model based on the state-of-the-art multimodal ophthalmological image analysis methods developed by Wang et al, 11 13 which uses a CNN architecture that does not consider uninterpretable images and is trained to minimise the cross-entropy loss with conventional backpropagation algorithms instead of the AGD proposed in our work. In Appendix D, we illustrate the intuition behind the design of Baselines A and B and their implementation details.…”
Section: Results
confidence: 99%
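The per-modality predictions in Baselines (1) and (2) above are fused into a final diagnosis via a label combination method (LCM). The paper's actual LCM table lives in its Appendix A and Supplementary Table S1 and is not reproduced here, so the fusion rule below is a hypothetical stand-in that only illustrates the general shape of such a step, not the authors' rule:

```python
# Hypothetical label-combination sketch. The real LCM (Appendix A /
# Supplementary Table S1 of the cited paper) is not available here,
# so this rule is an illustrative assumption, not the authors' method.

LABELS = {"RPN", "RPP", "uninterpretable"}  # per-modality outputs

def combine_labels(oct_label: str, cfp_label: str) -> str:
    """Fuse OCT and CFP per-modality predictions into one diagnosis.

    Assumed rule: report 'uninterpretable' only when both modalities
    are uninterpretable; otherwise flag pathology (RPP) if either
    interpretable modality detects it.
    """
    assert oct_label in LABELS and cfp_label in LABELS
    if oct_label == cfp_label == "uninterpretable":
        return "uninterpretable"
    if "RPP" in (oct_label, cfp_label):
        return "RPP"
    return "RPN"
```

With this assumed rule, a single uninterpretable modality does not block a diagnosis: `combine_labels("uninterpretable", "RPP")` still yields `"RPP"`.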
“…[38][39][40][41] This paper introduces a CNN-based approach that enables fully automated classification of retinal images into those with and without retinal pathology. Similar existing methods cannot be applied autonomously because they were not developed to account for uninterpretable images, which are frequently encountered during eye screening, 2,3,[7][8][9][10][11][12][13][14][15][16][17][18][19] and thus cannot handle them well. By addressing these limitations, our approach facilitates the development of automated retinal diagnosis systems in which a healthcare worker does not need to evaluate the quality of the images (so that some can be retaken) before they are submitted for analysis.…”
Section: Discussion
confidence: 99%