Background Medical artificial intelligence (AI) has entered the clinical implementation phase, although the real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and to externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. Methods In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. Findings The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0.955 (SD 0.046). AUC values in the external test set were 0.965 (0.035) in tertiary hospitals, 0.983 (0.031) in community hospitals, and 0.953 (0.042) in physical examination centres. The performance of CARE was similar to that of the ophthalmologists, although large variations in sensitivity were observed among ophthalmologists from different regions and with varying levels of experience.
The system retained strong identification performance when tested on the non-Chinese dataset (AUC 0.960, 95% CI 0.957–0.964 for referable diabetic retinopathy). Interpretation Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and could therefore be implemented and adopted for clinical care.
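The headline numbers above are per-abnormality AUCs summarised by their mean and SD. As an illustration only (not the CARE authors' code), the per-class AUC of a multi-label screening model can be computed with the Mann-Whitney formulation — the probability that a randomly chosen positive scores higher than a randomly chosen negative. The 200-image, 14-class data below are synthetic:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of positive/negative
    pairs in which the positive outranks the negative (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Per-abnormality AUC for a multi-label screening model:
# y: (n_images, n_classes) binary ground truth; p: predicted probabilities.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=(200, 14))
p = np.clip(y + rng.normal(0, 0.4, size=y.shape), 0, 1)  # noisy but informative scores
per_class_auc = [auc_mann_whitney(y[:, k], p[:, k]) for k in range(14)]
print(f"mean AUC = {np.mean(per_class_auc):.3f} (SD {np.std(per_class_auc):.3f})")
```

Reporting the mean and SD across the 14 per-class AUCs mirrors how the abstract summarises performance across abnormalities.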
Retinal detachment can lead to severe visual loss if not treated promptly. Early diagnosis of retinal detachment improves the rate of successful reattachment and visual outcomes, especially before macular involvement. Manual retinal detachment screening is time-consuming and labour-intensive, which makes large-scale clinical application difficult. In this study, we developed a cascaded deep learning system based on ultra-widefield fundus images for automated retinal detachment detection and for discriminating macula-on from macula-off retinal detachment. The performance of this system is reliable and comparable to that of an experienced ophthalmologist. In addition, the system can automatically provide guidance to patients regarding appropriate preoperative posturing to reduce retinal detachment progression, and regarding the urgency of retinal detachment repair. Implementation of this system on a global scale may drastically reduce the extent of vision impairment resulting from retinal detachment by providing timely identification and referral.
IMPORTANCE Evaluating corneal morphologic characteristics with corneal tomographic scans before refractive surgery is necessary to exclude patients with at-risk corneas and keratoconus. In previous studies, researchers performed screening with machine learning methods based on specific corneal parameters. To date, a deep learning algorithm has not been used in combination with corneal tomographic scans. OBJECTIVE To examine the use of a deep learning model in the screening of candidates for refractive surgery.
Artificial intelligence (AI) based on machine learning (ML) and deep learning (DL) techniques has gained tremendous global interest in recent years. Recent studies have demonstrated the potential of AI systems to provide improved capability in various tasks, especially in the field of image recognition. As an image-centric subspecialty, ophthalmology has become one of the frontiers of AI research. Trained on optical coherence tomography, slit-lamp images and even ordinary eye images, AI can achieve robust performance in the detection of glaucoma, corneal arcus and cataracts. Moreover, AI models based on other forms of data have also performed satisfactorily. Nevertheless, several challenges to AI application in ophthalmology have arisen, including standardization of datasets, validation and applicability of AI models, and ethical issues. In this review, we provide a summary of state-of-the-art AI applications in anterior segment ophthalmic diseases, potential challenges in clinical implementation, and future prospects.
Background: Lattice degeneration and/or retinal breaks, defined as notable peripheral retinal lesions (NPRLs), are prone to evolving into rhegmatogenous retinal detachment, which can cause severe visual loss. However, screening for NPRLs is time-consuming and labor-intensive. We therefore aimed to develop and evaluate a deep learning (DL) system for automated identification of NPRLs based on ultra-widefield fundus (UWF) images. Methods: A total of 5,606 UWF images from 2,566 participants were used to train and verify a DL system. All images were classified by 3 experienced ophthalmologists. The reference standard was determined when agreement was achieved among all 3 ophthalmologists, or adjudicated by another retinal specialist if disagreements existed. An independent test set of 750 images was used to verify the performance of 12 DL models trained using 4 different DL algorithms (InceptionResNetV2, InceptionV3, ResNet50, and VGG16) with 3 preprocessing techniques (original, augmented, and histogram-equalized images). Heatmaps were generated to visualize how the best DL system identified NPRLs. Results: In the test set, the best DL system for identifying NPRLs achieved an area under the curve (AUC) of 0.999 with a sensitivity and specificity of 98.7% and 99.2%, respectively. The best preprocessing method for each algorithm was augmentation of the original images (average AUC = 0.996). The best algorithm for each preprocessing method was InceptionResNetV2 (average AUC = 0.996). In the test set, 150 of 154 true-positive cases (97.4%) displayed heatmap visualization in the NPRL regions. Conclusions: A DL system can identify NPRLs from UWF images with high accuracy. This system may help to prevent the development of rhegmatogenous retinal detachment through early detection of NPRLs.
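One of the three preprocessing techniques compared above is histogram equalization. A minimal numpy sketch (an illustration of the general technique, not the study's actual pipeline) for an 8-bit grayscale image:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image: remap intensities
    through the normalized cumulative distribution so they spread over 0-255."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-intensity pixel counts
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # monotone lookup table
    return lut[img]

# A dim, low-contrast image occupies a narrow intensity band;
# equalization stretches it across the full dynamic range.
rng = np.random.default_rng(1)
dim = rng.integers(90, 110, size=(64, 64)).astype(np.uint8)
eq = equalize_histogram(dim)
print(int(dim.min()), int(dim.max()), "->", int(eq.min()), int(eq.max()))
```

For colour fundus images, equalization is typically applied per channel or to a luminance channel; the sketch above shows only the single-channel case.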
Background/aims To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in patients with high myopia based on optical coherence tomography (OCT) macular images. Methods In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResNetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden index. Results In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or better than those of retina specialists, as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps. Conclusions We used OCT macular images to develop CNN models that identify vision-threatening conditions in patients with high myopia. Our models achieved reliable sensitivities and high specificities, comparable to those of retina specialists, and may be applied for large-scale high myopia screening and patient follow-up.
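This abstract names two reusable ingredients: Focal Loss to handle class imbalance, and the Youden index to pick an operating threshold on the ROC curve. A hedged numpy sketch of both (illustrative, not the authors' implementation; `gamma=2.0` and `alpha=0.25` follow common defaults from the focal-loss literature):

```python
import numpy as np

def focal_loss(y, p, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy examples by (1 - p_t)^gamma
    so training focuses on hard, often rare, cases."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))

def youden_threshold(y, scores):
    """Operating threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[y == 1])
        spec = np.mean(~pred[y == 0])
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

y = np.array([0, 0, 0, 0, 1])                   # imbalanced toy labels
p = np.array([0.1, 0.2, 0.3, 0.6, 0.8])         # model scores
t, j = youden_threshold(y, p)
print(f"focal loss = {focal_loss(y, p):.4f}, threshold = {t}, J = {j:.2f}")
```

In practice the threshold is chosen on a validation set and then frozen before evaluating the independent test set.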
Background/Aims To develop a deep learning system for automated glaucomatous optic neuropathy (GON) detection using ultra-widefield fundus (UWF) images. Methods We trained, validated and externally evaluated a deep learning system for GON detection based on 22 972 UWF images from 10 590 subjects that were collected at 4 different institutions in China and Japan. The InceptionResNetV2 neural network architecture was used to develop the system. The area under the receiver operating characteristic curve (AUC), sensitivity and specificity were used to assess the performance of the system in detecting GON. The data set from the Zhongshan Ophthalmic Center (ZOC) was selected to compare the performance of the system to that of ophthalmologists who mainly conducted UWF image analysis in clinics. Results The system for GON detection achieved AUCs of 0.983–0.999 with sensitivities of 97.5–98.2% and specificities of 94.3–98.4% in four independent data sets. The most common reason for false-negative results was confounding optic disc characteristics caused by high myopia or pathological myopia (n=39 (53%)). The leading cause of false-positive results was the presence of other fundus lesions (n=401 (96%)). The performance of the system in the ZOC data set was comparable to that of an experienced ophthalmologist (p>0.05). Conclusion Our deep learning system can accurately detect GON from UWF images in an automated fashion. It may be used as a screening tool to improve the accessibility of screening and promote the early diagnosis and management of glaucoma.
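Several of the abstracts above report heatmaps that localise the evidence behind a prediction. A common way to obtain such heatmaps from a CNN is Grad-CAM-style weighting of the last convolutional feature maps; the sketch below uses random arrays in place of real activations and gradients, so it illustrates only the arithmetic, not any of these studies' actual pipelines:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM-style heatmap: weight each feature map by the global-average-
    pooled gradient of the class score, sum over channels, and keep only
    positive evidence (ReLU), normalized to [0, 1]."""
    weights = gradients.mean(axis=(0, 1))                       # (K,) one weight per channel
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # (H, W) weighted sum
    cam = np.maximum(cam, 0)                                    # ReLU: positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                                   # normalize to [0, 1]
    return cam

rng = np.random.default_rng(2)
fmap = rng.random((7, 7, 32))       # stand-in for last conv-layer activations (H, W, K)
grad = rng.normal(size=(7, 7, 32))  # stand-in for d(score)/d(activation), same shape
heatmap = grad_cam(fmap, grad)
```

The low-resolution heatmap is then upsampled to the input image size and overlaid on the fundus photograph, which is how "heatmap visualization in the NPRL regions" and similar claims are typically checked.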