Background: Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and to externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted.

Methods: In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed.

Findings: The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0.955 (SD 0.046). AUC values in the external test set were 0.965 (0.035) in tertiary hospitals, 0.983 (0.031) in community hospitals, and 0.953 (0.042) in physical examination centres. The performance of CARE was similar to that of the ophthalmologists, although large variations in sensitivity were observed among ophthalmologists from different regions and with varying experience. The system retained strong identification performance when tested on the non-Chinese dataset (AUC 0.960, 95% CI 0.957–0.964 for referable diabetic retinopathy).

Interpretation: Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and could therefore be implemented and adopted for clinical care.
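The headline metric throughout this abstract is the per-abnormality area under the ROC curve, pooled as a mean with SD across the 14 classes. As a rough numpy-only sketch of how such per-class AUCs can be computed and pooled (an illustration under assumed simulated data, not the authors' evaluation code; the tie-averaged rank formulation is the standard Mann–Whitney U estimator of AUC):

```python
import numpy as np

def rankdata(a):
    """Ranks of a (1-based), with tied values assigned their average rank."""
    order = np.argsort(a, kind="mergesort")
    ranks = np.empty(len(a), dtype=float)
    ranks[order] = np.arange(1, len(a) + 1)
    sorted_a = a[order]
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sorted_a[j + 1] == sorted_a[i]:
            j += 1
        ranks[order[i:j + 1]] = ranks[order[i:j + 1]].mean()  # average ties
        i = j + 1
    return ranks

def auc(y_true, y_score):
    """AUC via the Mann–Whitney U statistic: P(score_pos > score_neg)."""
    y_true = np.asarray(y_true)
    r = rankdata(np.asarray(y_score, dtype=float))
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (r[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Pool per-class AUCs across 14 binary abnormality heads (simulated labels/scores)
rng = np.random.default_rng(0)
aucs = []
for _ in range(14):
    y = rng.integers(0, 2, 500)
    s = y * 0.8 + rng.normal(0, 0.5, 500)  # noisy scores correlated with labels
    aucs.append(auc(y, s))
print(f"mean AUC {np.mean(aucs):.3f} (SD {np.std(aucs):.3f})")
```

Reporting the SD alongside the mean, as the abstract does, conveys how unevenly the classifier performs across the 14 abnormality types.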
Objective: This study aims to implement and investigate a deep-learning-based intelligent diagnostic system for pterygium using anterior segment photographs.

Methods: A total of 1,220 anterior segment photographs of normal eyes and pterygium patients were collected, of which 750 were used for training and 470 for testing the intelligent pterygium diagnostic model. Both the experts and the intelligent diagnosis system classified the images into three categories: (i) the normal group, (ii) the observation group of pterygium, and (iii) the operation group of pterygium. The intelligent diagnostic results were compared with the expert diagnoses using accuracy, sensitivity, specificity, kappa value, area under the receiver operating characteristic curve (AUC) with 95% confidence interval (CI), and F1-score.

Results: The accuracy of the intelligent diagnosis system on the 470 test photographs was 94.68%, with high diagnostic consistency; the kappa values of all three groups were above 85%. The AUC approached 100% in group 1 and 95% in the other two groups. Sensitivity, specificity, and F1-score were 100%, 99.64%, and 99.74% in group 1; 90.06%, 97.32%, and 92.49% in group 2; and 92.73%, 95.56%, and 89.47% in group 3, respectively.

Conclusion: The deep-learning-based intelligent pterygium diagnosis system can not only detect the presence of pterygium but also grade its severity. This study is expected to provide a new screening tool for pterygium and to benefit patients in areas lacking medical resources.
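The accuracy, kappa, and per-group sensitivity, specificity, and F1 figures reported above all derive from the 3×3 confusion matrix of system labels against expert labels. A minimal numpy sketch of how these statistics fall out of a confusion matrix (the matrix values below are invented for illustration, not the study's data):

```python
import numpy as np

def classification_report(cm):
    """Accuracy, Cohen's kappa, and per-class (sensitivity, specificity, F1)
    from a square confusion matrix with true classes on rows, predicted on columns."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    accuracy = np.trace(cm) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    per_class = []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k].sum() - tp          # class-k cases missed
        fp = cm[:, k].sum() - tp       # other cases called class k
        tn = n - tp - fn - fp
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        prec = tp / (tp + fp)
        f1 = 2 * prec * sens / (prec + sens)
        per_class.append((sens, spec, f1))
    return accuracy, kappa, per_class

# Hypothetical 3x3 matrix for normal / observation / operation groups
cm = [[160, 2, 0],
      [5, 145, 10],
      [0, 8, 140]]
acc, kappa, stats = classification_report(cm)
print(f"accuracy {acc:.4f}, kappa {kappa:.4f}")
for k, (sens, spec, f1) in enumerate(stats, start=1):
    print(f"group {k}: sensitivity {sens:.4f}, specificity {spec:.4f}, F1 {f1:.4f}")
```

Note that specificity is computed one-vs-rest here, which is the usual convention when a three-class grading task is summarised with per-group sensitivity/specificity pairs as in this abstract.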