Diabetic retinopathy (DR) is a common eye disease and a significant cause of blindness in diabetic patients. Regular screening with fundus photography and timely intervention are the most effective ways to manage the disease. The large population of diabetic patients and their massive screening requirements have generated interest in computer-aided, fully automatic diagnosis of DR. Meanwhile, deep neural networks have brought breakthroughs to many tasks in recent years. To automate the diagnosis of DR and provide appropriate suggestions to DR patients, we built a dataset of DR fundus images, each labeled with the treatment method it requires. Using this dataset, we trained deep convolutional neural network models to grade the severity of DR in fundus images, achieving an accuracy of 88.72% on a four-degree classification task. We deployed our models on a cloud computing platform and provided pilot DR diagnostic services for several hospitals; in the clinical evaluation, the system achieved a consistency rate of 91.8% with ophthalmologists, demonstrating the effectiveness of our work. INDEX TERMS Diabetic retinopathy, automatic diagnosis, deep neural networks.
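The abstract reports two headline metrics: grading accuracy against a reference standard and a consistency rate with ophthalmologists. Both are simple exact-match fractions; the sketch below illustrates the computation with invented grade labels (the study's actual labels are not published here):

```python
# Toy evaluation of four-degree DR grading: accuracy against reference
# grades and consistency with clinician grades. All labels below are
# invented for illustration, not the study's data.
def accuracy(pred, truth):
    """Fraction of exact grade matches between two label sequences."""
    assert len(pred) == len(truth)
    return sum(p == t for p, t in zip(pred, truth)) / len(pred)

model_grades  = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]  # model's four-degree grades
panel_grades  = [0, 1, 2, 3, 1, 1, 0, 3, 2, 1]  # reference (ground-truth) grades
doctor_grades = [0, 1, 2, 3, 2, 2, 0, 3, 2, 1]  # ophthalmologist grades

print(accuracy(model_grades, panel_grades))   # accuracy vs ground truth
print(accuracy(model_grades, doctor_grades))  # consistency with clinicians
```

Note that the two metrics answer different questions: accuracy measures agreement with the labeled ground truth, while the consistency rate measures agreement with clinicians grading the same images independently.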
Background Retinopathy of prematurity (ROP) is the leading cause of childhood blindness worldwide. An automated ROP detection system is urgently needed, and it appears to be a safe, reliable, and cost-effective complement to human experts. Methods An automated ROP detection system called DeepROP was developed using Deep Neural Networks (DNNs). ROP detection was divided into ROP identification and grading tasks. Two specific DNN models, i.e., Id-Net and Gr-Net, were designed for the identification and grading tasks, respectively. To develop the DNNs, large-scale datasets of retinal fundus images were constructed by having clinical ophthalmologists label the images from ROP screenings. Findings On the test dataset, Id-Net achieved a sensitivity of 96.62% (95% CI, 92.29%–98.89%) and a specificity of 99.32% (95% CI, 96.29%–99.98%) for ROP identification, while Gr-Net attained sensitivity and specificity values of 88.46% (95% CI, 96.29%–99.98%) and 92.31% (95% CI, 81.46%–97.86%), respectively, on the ROP grading task. On another 552 cases, the developed DNNs outperformed some human experts. In a clinical setting, the sensitivity and specificity of DeepROP for ROP identification were 84.91% (95% CI, 76.65%–91.12%) and 96.90% (95% CI, 95.49%–97.96%), respectively, whereas the corresponding measures for ROP grading were 93.33% (95% CI, 68.05%–99.83%) and 73.63% (95% CI, 68.05%–99.83%), respectively. Interpretation We constructed large-scale ROP datasets with adequate clinical labels and proposed novel DNN models that can directly learn ROP features from big data. The developed DeepROP has the potential to be an efficient and effective system for automated ROP screening. Funding National Natural Science Foundation of China under Grants 61432012 and U1435213.
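The sensitivity and specificity figures above are reported with 95% confidence intervals. One standard way to compute such intervals for a binomial proportion is the Wilson score method; the sketch below is a minimal version of it. The confusion-matrix counts are hypothetical, chosen only so the point estimates resemble the abstract's — the paper's actual counts and its CI method are not given here.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity with 95% CIs from confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Hypothetical counts for illustration (not the paper's data):
(sens, sens_ci), (spec, spec_ci) = sens_spec(tp=143, fn=5, tn=440, fp=3)
print(f"sensitivity {sens:.2%} (95% CI {sens_ci[0]:.2%}-{sens_ci[1]:.2%})")
print(f"specificity {spec:.2%} (95% CI {spec_ci[0]:.2%}-{spec_ci[1]:.2%})")
```

The Wilson interval is preferred over the naive normal approximation when proportions sit near 0% or 100%, as they do for a highly specific screening model, because it never produces bounds outside [0, 1].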
Retinopathy of Prematurity (ROP) is a retinal vasoproliferative disorder principally observed in infants born prematurely with low birth weight, and it is an important cause of childhood blindness. Although automatic or semi-automatic diagnosis of ROP has been investigated, most previous studies have focused on "plus" disease, which is indicated by abnormalities of the retinal vasculature; few have reported methods for identifying the "stage" of ROP. Deep neural networks have achieved impressive results in many computer vision and medical image analysis problems, raising expectations that they might be a promising tool for automatic diagnosis of ROP. In this paper, a convolutional neural network (CNN) with a novel architecture is proposed to recognize the existence and severity of ROP on a per-examination basis. The severity of ROP is divided into mild and severe cases according to disease progression. The proposed architecture consists of two sub-networks connected by a feature aggregation operator. The first sub-network extracts high-level features from fundus images; the aggregation operator fuses these features across the images in an examination, and the fused representation is fed to the second sub-network to predict the examination's class. A large dataset imaged by RetCam 3 is used to train and evaluate the model. The high classification accuracy in the experiments demonstrates the effectiveness of the proposed architecture for recognizing ROP.
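A minimal sketch of the described two-sub-network design, assuming element-wise max pooling as the feature aggregation operator and stand-in random linear layers in place of the trained convolutional sub-networks (the abstract does not specify the operator, the dimensions, or the weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration; the paper does not specify them.
IMG_DIM, FEAT_DIM, N_CLASSES = 64, 16, 3  # {no ROP, mild, severe}

# Stand-ins for the two trained sub-networks: random linear maps here,
# where the real system uses convolutional networks.
W_extract = rng.normal(size=(IMG_DIM, FEAT_DIM))      # first sub-network
W_classify = rng.normal(size=(FEAT_DIM, N_CLASSES))   # second sub-network

def extract_features(image_vec):
    """First sub-network: map one fundus image to a high-level feature vector."""
    return np.maximum(W_extract.T @ image_vec, 0.0)  # linear map + ReLU

def aggregate(features):
    """Feature aggregation operator (assumed max pooling): element-wise max
    over the images of one examination, so the examination-level prediction
    sees the strongest evidence from any single image."""
    return np.max(np.stack(features), axis=0)

def classify_examination(images):
    """Fuse per-image features and predict an examination-level class."""
    fused = aggregate([extract_features(img) for img in images])
    logits = W_classify.T @ fused
    probs = np.exp(logits - logits.max())  # softmax over the classes
    return probs / probs.sum()

# One examination with a variable number of fundus images:
exam = [rng.normal(size=IMG_DIM) for _ in range(5)]
print(classify_examination(exam))  # class probabilities for the examination
```

A pooling-style aggregation is what makes the design per-examination: the classifier accepts any number of images per examination, and its output does not depend on the order in which the images are captured.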
The current results suggest that the 27G PPV system is a safe and effective treatment for various vitreoretinal diseases. When learning to perform 27G PPV, surgeons may encounter a learning curve and should gradually expand their surgical indications from easy to pathologically complicated cases.
Aim To retrospectively compare the safety and effectiveness of 27-gauge (27G) microincision vitrectomy surgery (MIVS) with 25-gauge (25G) MIVS for the treatment of primary rhegmatogenous retinal detachment (RRD) with silicone oil tamponade. Methods Ninety-two patients with RRD who underwent MIVS from May 1, 2015, to June 30, 2017, were included in this study. Fifty-eight eyes underwent 25G MIVS and 34 eyes underwent 27G MIVS. We analyzed patient characteristics, surgical time, main clinical outcomes, and the rate of complications. Results The mean surgical time was 56.7 ± 35.9 min for 25G MIVS and 55.7 ± 36.1 min for 27G MIVS, with no significant difference between the two groups (P=0.894). The primary anatomical success rate after a single operation was 94.8% for 25G MIVS and 91.2% for 27G MIVS (P=0.666). Baseline and final-visit best-corrected visual acuity (BCVA) were 1.9 ± 1.1 and 1.0 ± 0.8 logMAR in the 25G group, and 1.7 ± 1.0 and 1.1 ± 0.8 logMAR in the 27G group; final-visit BCVA improved significantly in both groups (P < 0.001). However, there was no significant difference in the visual improvement ratio (>0.2 logMAR) between the two groups (P=0.173). No severe intraoperative complications were observed. Iatrogenic retinal breaks occurred in 2 eyes (3.4%) in the 25G group and 1 eye (2.9%) in the 27G group during peripheral vitreous base shaving. The rate of transient ocular hypertension (>25 mmHg) within the first postoperative week was 25.9% in the 25G group and 11.8% in the 27G group (P=0.120). Conclusions This study found no significant anatomical or functional difference between 27G and 25G MIVS in the treatment of primary RRD. Therefore, 27G vitrectomy appears to be a safe and effective surgery for the treatment of primary RRD.
Purpose: To evaluate the screening potential of a deep learning algorithm-derived severity score by determining its ability to detect clinically significant severe retinopathy of prematurity (ROP). Methods: Fundus photographs were collected, and a standard panel diagnosis was generated for each examination by combining three independent image-based gradings. All images were analyzed using a deep learning algorithm, and a quantitative assessment of retinal vascular abnormality (DeepROP score) was assigned on a 1 to 100 scale. The area under the receiver operating characteristic curve (AUC) and the distribution pattern of all diagnostic parameters and categories of ROP were analyzed. The correlation between the DeepROP score and expert rank ordering by overall ROP severity was calculated for 50 examinations. Results: A total of 9,882 individual examinations with 54,626 images from 2,801 infants were analyzed. Fifty-six examinations (0.6%) demonstrated Type 1 ROP and 54 examinations (0.5%) demonstrated Type 2 ROP. The DeepROP score had an AUC of 0.981 for detecting Type 1 ROP and 0.986 for Type 2 ROP. There was a statistically significant correlation between the expert rank ordering of overall disease severity and the DeepROP score (correlation coefficient 0.758, P < 0.001). When a hypothetical referral cutoff score of 35 was selected, all cases of severe ROP (Type 1 and Type 2) were captured and 8,562 eyes (87.6%) with no or mild ROP were excluded. Conclusion: The DeepROP score determined by the deep learning algorithm was an objective and quantitative indicator of ROP severity, and it has potential for automated detection of clinically significant severe ROP.