Retinal detachment can lead to severe visual loss if not treated promptly. Early diagnosis of retinal detachment can improve the rate of successful reattachment and the visual outcome, especially before macular involvement. Manual retinal detachment screening is time-consuming and labour-intensive, which makes large-scale clinical application difficult. In this study, we developed a cascaded deep learning system based on ultra-widefield fundus images for automated retinal detachment detection and for discerning macula-on from macula-off retinal detachment. The performance of this system is reliable and comparable to that of an experienced ophthalmologist. In addition, the system can automatically provide guidance to patients regarding appropriate preoperative posturing to reduce retinal detachment progression and regarding the urgency of retinal detachment repair. Implementing this system on a global scale may drastically reduce the extent of vision impairment resulting from retinal detachment by providing timely identification and referral.
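The two-stage cascade described above can be sketched as simple routing logic. This is a minimal illustration only: the probability inputs and the 0.5 threshold are hypothetical placeholders standing in for the outputs of the trained networks, which the abstract does not specify.

```python
# Hypothetical sketch of a two-stage cascade: stage 1 flags retinal
# detachment; stage 2 runs only on positives to discern macula-on
# vs macula-off status. Thresholds and inputs are illustrative.

def cascade_triage(p_detachment, p_macula_off, threshold=0.5):
    """Route one image through the cascade and return a triage label."""
    if p_detachment < threshold:
        return "no_detachment"
    # Stage 2 is evaluated only when stage 1 suspects a detachment.
    return "macula_off" if p_macula_off >= threshold else "macula_on"
```

In a cascade like this, the second-stage model never sees images the first stage has already cleared, which keeps the expensive discrimination step focused on suspected cases.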
Background: Lattice degeneration and/or retinal breaks, defined as notable peripheral retinal lesions (NPRLs), are prone to evolving into rhegmatogenous retinal detachment, which can cause severe visual loss. However, screening for NPRLs is time-consuming and labor-intensive. We therefore aimed to develop and evaluate a deep learning (DL) system for the automated identification of NPRLs based on ultra-widefield fundus (UWF) images. Methods: A total of 5,606 UWF images from 2,566 participants were used to train and verify the DL system. All images were classified by 3 experienced ophthalmologists. The reference standard was determined when all 3 ophthalmologists agreed, or was adjudicated by another retinal specialist if disagreements existed. An independent test set of 750 images was used to verify the performance of 12 DL models trained using 4 different DL algorithms (InceptionResNetV2, InceptionV3, ResNet50, and VGG16) with 3 preprocessing techniques (original, augmented, and histogram-equalized images). Heatmaps were generated to visualize how the best DL system identified NPRLs. Results: In the test set, the best DL system for identifying NPRLs achieved an area under the curve (AUC) of 0.999 with a sensitivity and specificity of 98.7% and 99.2%, respectively. The best preprocessing method for each algorithm was augmentation of the original images (average AUC = 0.996). The best algorithm for each preprocessing method was InceptionResNetV2 (average AUC = 0.996). In the test set, 150 of 154 true-positive cases (97.4%) displayed heatmap visualization in the NPRL regions. Conclusions: A DL system has high accuracy in identifying NPRLs based on UWF images. This system may help to prevent the development of rhegmatogenous retinal detachment through early detection of NPRLs.
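The 4-algorithm × 3-preprocessing evaluation grid and the AUC metric used to compare the 12 models can be sketched as follows. This is a self-contained illustration, not the study's code: the AUC is computed here from pairwise rank comparisons (equivalent to the Mann-Whitney U statistic), and no model outputs from the study are reproduced.

```python
# Sketch of the 12-model evaluation grid (4 algorithms x 3
# preprocessing techniques) and a rank-based AUC computation.
from itertools import product

def auc(labels, scores):
    """AUC via pairwise rank comparison of positive vs negative scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

algorithms = ["InceptionResNetV2", "InceptionV3", "ResNet50", "VGG16"]
preprocessing = ["original", "augmented", "histogram_equalized"]
grid = list(product(algorithms, preprocessing))  # 12 configurations
```

Each configuration in `grid` would be trained independently and scored on the same held-out test set, so the AUCs are directly comparable across the grid.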
Background Common diseases are not satisfactorily managed under the current health-care system because of inadequate medical resources and limited accessibility. We aimed to establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and explored an AI-based medical referral pattern to improve collaborative efficiency and resource coverage. Methods The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel health-care facilities and capture modes. The datasets were labelled using a three-step strategy: capture mode recognition (mydriatic-diffuse, mydriatic-slit lamp, non-mydriatic-diffuse, and non-mydriatic-slit lamp modes); cataract diagnosis as a normal lens, cataract, or a postoperative eye; and detection of referable cataracts with respect to cause and severity. The area under the curve (AUC) was measured at each stage. We also integrated this cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary health care, and specialised hospital services. Diagnostic accuracy, treatment referral, and the ophthalmologist-to-population service ratio were used to evaluate the performance and efficacy of the system. Findings The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: capture mode recognition (AUC 99.28-99.71% across the four capture modes), cataract diagnosis (for mydriatic-slit lamp mode, AUC 99.82% [95% CI 98.93-100] for normal lens, 99.96% [99.90-100] for cataract, and 99.93% [99.78-100] for postoperative eye; AUCs >99% for the other capture modes), and detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be referred for treatment, substantially increasing the ophthalmologist-to-population service ratio by 10.2 times compared with the traditional pattern. Interpretation The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. Our AI-based medical referral pattern will be extended to other common disease conditions and resource-intensive situations.
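The three-step flow (capture mode check, lens diagnosis, referral decision) can be sketched as plain routing logic. The label strings and decision rules below are illustrative assumptions for exposition, not the platform's actual implementation.

```python
# Hypothetical sketch of the three-step cataract triage flow:
# (1) verify the capture mode, (2) diagnose lens status,
# (3) refer only referable cataracts to a specialised hospital.

def cataract_pipeline(capture_mode_ok, diagnosis, referable):
    """Return a care-level recommendation for one screened eye."""
    if not capture_mode_ok:
        return "recapture"  # step 1: unusable image, ask for a retake
    if diagnosis in ("normal_lens", "postoperative_eye"):
        return "home_monitoring"  # step 2: no active cataract care needed
    # Step 3: only referable cataracts escalate to hospital services.
    return "refer_to_hospital" if referable else "primary_care_followup"
```

Routing non-referable cases to home monitoring or primary care is what concentrates specialist capacity on the minority of people who need it, which is the mechanism behind the reported gain in the ophthalmologist-to-population service ratio.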
Background/Aims To develop a deep learning system for automated detection of glaucomatous optic neuropathy (GON) using ultra-widefield fundus (UWF) images. Methods We trained, validated and externally evaluated a deep learning system for GON detection based on 22,972 UWF images from 10,590 subjects collected at 4 different institutions in China and Japan. The InceptionResNetV2 neural network architecture was used to develop the system. The area under the receiver operating characteristic curve (AUC), sensitivity and specificity were used to assess the system's performance in detecting GON. The dataset from the Zhongshan Ophthalmic Center (ZOC) was selected to compare the system's performance with that of ophthalmologists who mainly conduct UWF image analysis in clinics. Results The system for GON detection achieved AUCs of 0.983-0.999 with sensitivities of 97.5-98.2% and specificities of 94.3-98.4% in four independent datasets. The most common reason for false-negative results was confounding optic disc characteristics caused by high myopia or pathological myopia (n=39, 53%). The leading cause of false-positive results was the presence of other fundus lesions (n=401, 96%). The performance of the system on the ZOC dataset was comparable to that of an experienced ophthalmologist (p>0.05). Conclusion Our deep learning system can accurately detect GON from UWF images in an automated fashion. It may be used as a screening tool to improve the accessibility of screening and promote the early diagnosis and management of glaucoma.
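The sensitivity and specificity figures reported above are derived from a screening confusion matrix. A minimal sketch, with placeholder counts that are not the study's figures:

```python
# Sensitivity and specificity from confusion-matrix counts.
# tp/fn: GON eyes correctly/incorrectly classified;
# tn/fp: normal eyes correctly/incorrectly classified.

def screening_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity) for a binary screening test."""
    sensitivity = tp / (tp + fn)  # true-positive rate among GON eyes
    specificity = tn / (tn + fp)  # true-negative rate among normal eyes
    return sensitivity, specificity

# Example with made-up counts: 98 of 100 GON eyes and 94 of 100
# normal eyes classified correctly.
sens, spec = screening_metrics(tp=98, fp=6, tn=94, fn=2)
```

For a screening tool, high sensitivity is usually prioritised so that few true cases are missed, with false positives filtered out at the referral stage.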
and evaluation of a deep learning system for screening retinal hemorrhage based on ultra-widefield fundus images.