The first version of the Retinal IMage database for Optic Nerve Evaluation (RIM-ONE) was published in 2011 and was followed by two further releases, making it one of the most cited public retinography databases for evaluating glaucoma. Although it was initially conceived as a database of reference images for optic disc segmentation, in recent years its use has shifted toward training and testing deep learning models. The recent REFUGE challenge laid out criteria that an image set of these characteristics must satisfy to serve as a standard reference for validating deep learning methods that rely on such data. This, combined with some confusion, and even improper use, observed with the three published versions, led us to revise and combine them into a new, publicly available version called RIM-ONE DL (RIM-ONE for Deep Learning). This paper describes this image set, which consists of 313 retinographies from normal subjects and 172 retinographies from patients with glaucoma. All of these images were assessed by two experts and include a manual segmentation of the disc and cup. The paper also describes an evaluation benchmark based on several well-known convolutional neural network architectures.
Purpose: The main objective of this study is to characterize the activation regions of three Deep Learning models using infrared images of the optic nerve of glaucoma patients.
Methods: We retrospectively collected a sample of patients with primary or secondary open-angle glaucoma and normal subjects. The infrared images were recorded with a spectral-domain optical coherence tomograph. Three pretrained models were used: VGG19, ResNet101, and ShuffleNet. Sensitivity, specificity, diagnostic accuracy in training and testing, and the area under the ROC curve were calculated for all three models. Gradient-weighted class activation mapping (Grad-CAM) was used to obtain the activation regions that highlight the image components most important for each model's prediction. The correlation coefficients between the glaucoma activation maps and between the normal activation maps of the three models were calculated, and the location of the activation regions in the glaucoma and normal images was determined for each model.
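The Grad-CAM computation used in these Methods can be sketched in a few lines once the convolutional feature maps and their gradients have been extracted from a network. The sketch below is a minimal NumPy implementation of the published Grad-CAM formula; the random tensors stand in for real OCT-derived features and are purely illustrative, not data from the study.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one convolutional layer.

    activations: (C, H, W) feature maps for one image
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) map normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (Selvaraju et al.)
    weights = gradients.mean(axis=(1, 2))  # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlay on the input image
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: random tensors standing in for one image's features
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
heatmap = grad_cam(acts, grads)
```

In practice the low-resolution map (here 7×7) is upsampled to the input image size before locating the activated quadrant of the optic nerve head.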
Results: 639 eyes from 415 patients were collected, 432 glaucomatous and 207 normal. The accuracy of the models on the test set was 96.9% (VGG19), 95.3% (ResNet101), and 93.8% (ShuffleNet). The activation maps of the ResNet101 and ShuffleNet models were highly correlated for the glaucoma (0.75) and normal (0.68) cases. For all three models, the region of interest was located mainly in the inferior temporal quadrant: in 65.2% of cases for VGG19, 52.9% for ShuffleNet, and 54.1% for ResNet101.
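The map-to-map correlations reported above can be computed as a Pearson coefficient between flattened activation maps. A minimal sketch, assuming two maps of equal shape (the toy arrays below are illustrative, not study data):

```python
import numpy as np

def map_correlation(cam_a, cam_b):
    """Pearson correlation between two activation maps of equal shape."""
    a, b = cam_a.ravel(), cam_b.ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Identical maps correlate perfectly; an inverted copy correlates negatively
m = np.arange(16, dtype=float).reshape(4, 4)
print(round(map_correlation(m, m), 2))   # → 1.0
print(round(map_correlation(m, -m), 2))  # → -1.0
```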
Conclusions: In glaucomatous eyes, the regions of interest assessed with the gradient-weighted class activation maps of the three models analysed are located, in more than half of the cases, in the inferior temporal quadrant of the optic nerve.
Purpose: To determine the diagnostic generalizability of two deep learning models when trained only with ganglion cell layer (GCL) images of mild glaucoma.
Methods: We collected a sample from patients with primary or secondary open-angle glaucoma and normal subjects. The glaucoma sample was divided into mild (MD ≤ 6 dB) and moderate-advanced (MD > 6 dB). The GCL images were recorded with a spectral-domain optical coherence tomograph. Two pretrained models were used: ResNet101 and ShuffleNet. Sensitivity, specificity, diagnostic accuracy in training and testing, and the area under the ROC curve were calculated for the two models under three training conditions, according to how the images were partitioned into training and test sets. In the first partition, mild glaucomas were used for training and moderate-advanced for testing. In the second, moderate-advanced glaucomas were used for training and mild for testing. In the third, the whole sample was used without classifying by severity. Gradient-weighted class activation mapping (Grad-CAM) was used to obtain saliency maps that highlight the image components most important for each model's prediction. The correlation coefficient between the maps of the glaucoma and normal images of the two models was calculated.
Results: 561 eyes were collected from 337 patients, 356 glaucomatous and 200 normal. The accuracy of the models on the test set was 90.9% (ResNet101) and 94.2% (ShuffleNet) in partition 1, and 74.4% (ResNet101) and 73.5% (ShuffleNet) in partition 2; in partition 3, an accuracy of 94.6% was obtained with both models. The correlation coefficient between the Grad-CAM saliency maps of the two models was 0.46 for glaucoma images and 0.83 for normal images.
Conclusions: The two deep learning models are able to generalize and achieve high diagnostic accuracy when trained only with GCL images of mild glaucoma. Both models show a high correlation between their Grad-CAM saliency maps on normal images.
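The severity-based split behind the three partitions amounts to filtering the labelled sample on the MD threshold. A minimal sketch, where the record field names and the convention of storing MD as a positive deviation in dB are illustrative assumptions, not details from the paper:

```python
def split_by_severity(eyes, md_threshold=6.0):
    """Split a labelled sample into the groups used to build the three
    train/test partitions: mild glaucoma (MD <= threshold),
    moderate-advanced glaucoma (MD > threshold), and normals.

    Each eye is a dict with 'label' ('glaucoma' or 'normal') and
    'md' (mean deviation magnitude in dB) -- illustrative field names.
    """
    mild = [e for e in eyes
            if e["label"] == "glaucoma" and e["md"] <= md_threshold]
    advanced = [e for e in eyes
                if e["label"] == "glaucoma" and e["md"] > md_threshold]
    normal = [e for e in eyes if e["label"] == "normal"]
    return mild, advanced, normal

# Toy sample: partition 1 would train on mild (plus normals) and
# test on moderate-advanced
sample = [
    {"label": "glaucoma", "md": 3.2},
    {"label": "glaucoma", "md": 9.8},
    {"label": "normal", "md": 0.5},
]
mild, advanced, normal = split_by_severity(sample)
```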