In recent years, with the rapid development of computer technology, the continual optimization of learning algorithms and architectures, and the establishment of numerous large databases, artificial intelligence (AI) has seen unprecedented development and application in the field of ophthalmology. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucomatous optic neuropathy. More recently, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium. We discuss the limitations and challenges of AI in the diagnosis of ocular surface diseases, as well as prospects for the future.
Tear meniscus height (TMH) is an important reference parameter in the diagnosis of dry eye disease. However, most traditional methods of measuring TMH are manual or semi-automatic, which makes the measurement subjective, time-consuming, and laborious. To solve these problems, a segmentation algorithm based on deep learning and image processing was proposed to realize the automatic measurement of TMH. To accurately segment the tear meniscus region, the segmentation algorithm designed in this study is based on the DeepLabv3 architecture and combines partial structures of the ResNet50, GoogleNet, and FCN networks for further improvement. A total of 305 ocular surface images were used in this study, divided into training and testing sets. The training set was used to train the network model, and the testing set was used to evaluate model performance. In the experiments, for tear meniscus segmentation, the average intersection over union was 0.896, the Dice coefficient was 0.884, and the sensitivity was 0.877. For segmentation of the central ring of the corneal projection ring, the average intersection over union was 0.932, the Dice coefficient was 0.926, and the sensitivity was 0.947. According to the comparison of evaluation indices, the segmentation model used in this study was superior to existing models. Finally, the TMH measurements of the testing set obtained using the proposed method were compared with manual measurement results via linear regression; the regression line was y = 0.98x − 0.02, and the overall correlation coefficient was r² = 0.94. Thus, the proposed method for measuring TMH is highly consistent with manual measurement, can realize automatic measurement of TMH, and can assist clinicians in the diagnosis of dry eye disease.
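The segmentation metrics reported above (intersection over union, Dice coefficient, sensitivity) are all simple functions of the pixel-wise true positives, false positives, and false negatives. A minimal sketch in NumPy, using small illustrative masks rather than the study's data:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """IoU, Dice coefficient, and sensitivity for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # pixels correctly marked foreground
    fp = np.logical_and(pred, ~truth).sum()   # pixels wrongly marked foreground
    fn = np.logical_and(~pred, truth).sum()   # foreground pixels that were missed
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    return iou, dice, sensitivity

# Toy 4x4 example (illustrative only): the ground truth has 4 foreground
# pixels; the prediction recovers 2 of them and adds 1 false positive.
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:2] = 1
pred[0, 0] = 1
iou, dice, sens = segmentation_metrics(pred, truth)
# tp = 2, fp = 1, fn = 2  →  IoU = 0.4, Dice = 4/7, sensitivity = 0.5
```

Note that Dice is always at least as large as IoU for the same masks, which matches the relative magnitudes reported in the abstract.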
Purpose: To assess the value of an automated classification model for dry and wet macular degeneration based on the ConvNeXT model. Methods: A total of 672 fundus images of normal, dry, and wet macular degeneration were collected from the Affiliated Eye Hospital of Nanjing Medical University, and the fundus images of dry macular degeneration were expanded. The ConvNeXT three-category model was trained on the original and expanded datasets and compared with the VGG16, ResNet18, ResNet50, EfficientNetB7, and RegNet three-category models. A total of 289 fundus images were used to test the models, and the classification results of the models on the different datasets were compared. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), accuracy, and kappa. Results: Using 289 fundus images, the three-category models trained on the original and expanded datasets were assessed. The ConvNeXT model trained on the expanded dataset was the most effective, with a diagnostic accuracy of 96.89%, a kappa value of 94.99%, and high diagnostic consistency. The sensitivity, specificity, F1-score, and AUC values for normal fundus images were 100.00, 99.41, 99.59, and 99.80%, respectively. The sensitivity, specificity, F1-score, and AUC values for dry macular degeneration diagnosis were 87.50, 98.76, 90.32, and 97.10%, respectively. The sensitivity, specificity, F1-score, and AUC values for wet macular degeneration diagnosis were 97.52, 97.02, 96.72, and 99.10%, respectively. Conclusion: The ConvNeXT-based classification model automatically identified dry and wet macular degeneration, aiding rapid and accurate clinical diagnosis.
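For a three-category model like the one above, per-class sensitivity, specificity, and F1-score are conventionally computed one-vs-rest from the confusion matrix. A minimal sketch, where the confusion matrix below is a hypothetical example, not the study's actual results:

```python
import numpy as np

def one_vs_rest_metrics(cm, k):
    """Sensitivity, specificity, and F1 for class k of a confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j.
    Class k is treated as positive; all other classes as negative.
    """
    tp = cm[k, k]
    fn = cm[k].sum() - tp          # class-k samples predicted as something else
    fp = cm[:, k].sum() - tp       # other samples predicted as class k
    tn = cm.sum() - tp - fn - fp   # everything else
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, f1

# Hypothetical confusion matrix for classes 0=normal, 1=dry AMD, 2=wet AMD.
cm = np.array([[50, 0, 0],
               [ 2, 8, 0],
               [ 1, 1, 38]])
sens, spec, f1 = one_vs_rest_metrics(cm, 1)  # metrics for the dry-AMD class
```

The one-vs-rest convention is also why each class in the abstract gets its own sensitivity/specificity/AUC tuple rather than a single shared value.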
BACKGROUND Retinal vein occlusion (RVO) is the second most common cause of blindness after diabetic retinopathy. Patients with RVO often develop macular edema and neovascular glaucoma, which may cause irreversible damage to visual function. RVO includes macular retinal vein occlusion, central retinal vein occlusion, and branch retinal vein occlusion. OBJECTIVE Providing patients with an accurate diagnosis, followed by timely and effective treatment, is very important for the prognosis of visual function. Therefore, in this paper, we use the Swin Transformer model with a label smoothing method to classify fundus images. METHODS First, 483 and 161 fundus images were used as the training set and the validation set, respectively, to train and tune the model, whose accuracy reached 98.1%. An additional 161 fundus images were used as the test set to evaluate the model's performance. Next, the areas under the receiver operating characteristic curve for macular retinal vein occlusion, central retinal vein occlusion, and branch retinal vein occlusion were obtained using the Swin Transformer model. Finally, we compared these results with those of models trained using deep convolutional neural networks. RESULTS The values obtained using the Swin Transformer model for macular retinal vein occlusion, central retinal vein occlusion, and branch retinal vein occlusion were 0.9987, 0.9981, and 0.9974, respectively. The comparison with other models indicated that the Swin Transformer model performed best. These results demonstrate that our method can automatically diagnose RVO and determine its type from fundus images, which has the potential to aid the early diagnosis of patients with RVO. CONCLUSIONS Our model can automatically diagnose RVO through fundus images, and its diagnostic accuracy is higher than that of MobileNetV2 and ResNet18. In addition, it can process datasets automatically and efficiently without manual assistance. The model not only diagnoses RVO but also accurately determines its specific type, which has important clinical significance in practice.
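The label smoothing mentioned in the RVO study replaces the one-hot training target with a mixture of the one-hot vector and a uniform distribution over the K classes, which discourages overconfident predictions. A minimal NumPy sketch of the loss; the smoothing factor eps=0.1 and the example logits are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax for a 1-D logit vector."""
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    The one-hot target is replaced by (1 - eps) * onehot + eps / K,
    so some probability mass is assigned to every class.
    """
    k = logits.size
    onehot = np.eye(k)[target]
    smooth_target = (1.0 - eps) * onehot + eps / k
    return -(smooth_target * log_softmax(logits)).sum()

# Illustrative 3-class logits; with eps = 0 this reduces to the
# standard cross-entropy loss.
logits = np.array([2.0, 1.0, 0.1])
loss_plain = smoothed_cross_entropy(logits, 0, eps=0.0)
loss_smooth = smoothed_cross_entropy(logits, 0, eps=0.1)
```

When the model is confident in the correct class, the smoothed loss stays strictly above the plain cross-entropy, which is the regularizing effect the technique relies on.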