Segmentation of the retinal vascular tree is a major step in detecting ocular pathologies. The clinical context demands high segmentation performance with reduced processing time. To achieve highly accurate segmentation, several automated methods have been built on Deep Learning (DL) networks. However, the convolutional layers they use entail high computational complexity and hence long execution times. To address this need, this work presents a new DL-based method for retinal vessel tree segmentation. Our main contribution is a new U-shaped DL architecture using lightweight convolution blocks, designed to preserve high segmentation performance while reducing computational complexity. As a second main contribution, preprocessing and data augmentation steps are proposed with respect to retinal image and blood vessel characteristics. The proposed method is tested on the DRIVE and STARE databases, achieving a better trade-off between the retinal blood vessel detection rate and the detection time, with average accuracies of 0.978 and 0.98 in 0.59 s and 0.48 s per fundus image on an NVIDIA GTX 980 GPU, for DRIVE and STARE fundus images respectively.
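The abstract does not specify the lightweight convolution blocks; one common choice for cutting complexity is the depthwise-separable convolution, whose parameter saving over a standard convolution can be sketched as follows (a minimal illustration under that assumption, not the paper's actual block):

```python
# Parameter counts for a standard k x k convolution versus a
# depthwise-separable one (depthwise k x k followed by pointwise 1 x 1).
# This is one common "lightweight block"; the paper's design may differ.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Every output channel has a full k x k x c_in kernel.
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise: one k x k kernel per input channel.
    # Pointwise: a 1 x 1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 64)   # 36864 weights
sep = separable_conv_params(3, 64, 64)  # 4672 weights, roughly 7.9x fewer
```

The same substitution inside each encoder/decoder stage of a U-shaped network reduces both the parameter count and the per-image inference cost.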
The smartphone code of the ONH (optic nerve head) detection algorithm was applied to the STARE and DRIVE databases, resulting in detection rates of about 96% and 100%, respectively, with average execution times of about 2 s and 1.3 s. In addition, two other databases captured by the d-Eye and iExaminer snap-on smartphone lenses were considered, resulting in detection rates of about 93% and 91%, with average execution times of about 2.7 s and 2.2 s, respectively.
This paper presents the real-time implementation of deep neural networks on smartphone platforms to detect and classify diabetic retinopathy from eye fundus images. This implementation extends a previously reported one by considering all five stages of diabetic retinopathy. Two deep neural networks are first trained, one for detecting four stages and the other for further classifying the last stage into two more stages, based on fundus images from the EyePACS and APTOS datasets and using transfer learning. It is then shown how these trained networks are turned into a smartphone app, in both Android and iOS versions, to process images captured by smartphone cameras in real time. The app is designed so that fundus images can be captured and processed in real time by smartphones together with commercially available lens attachments. The developed real-time smartphone app provides a cost-effective and widely accessible approach for conducting first-pass diabetic retinopathy eye exams in remote clinics or areas with limited access to fundus cameras and ophthalmologists.
Keywords: Real-time implementation of deep neural networks on smartphones, real-time smartphone app for detection and classification of diabetic retinopathy, first-pass eye exam by smartphone app.
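The two-network cascade described above can be sketched as follows; `coarse_net` and `refine_net` are hypothetical stand-ins for the two trained models, and the stage numbering is an assumption for illustration:

```python
# Cascaded five-stage DR grading: a first network predicts coarse
# stages 0-3, and a second network splits the last coarse stage into
# two finer stages, giving five stages in total.
# coarse_net / refine_net are placeholders for the trained models.

def grade_dr(image, coarse_net, refine_net) -> int:
    stage = coarse_net(image)           # returns 0, 1, 2, or 3
    if stage == 3:                      # last coarse stage: refine further
        stage = 3 + refine_net(image)   # refine_net returns 0 or 1 -> stage 3 or 4
    return stage

# Example with stub models standing in for the trained networks:
coarse = lambda img: 3
refine = lambda img: 1
final_stage = grade_dr(None, coarse, refine)  # -> 4
```

On a phone, each network call would be a single forward pass of the converted (e.g. mobile-optimized) model; the cascade only runs the second network when the first flags the last stage.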
Several retinal pathologies cause severe damage that may lead to vision loss. Some of this damage requires expensive treatment, while other damage is irreversible due to the lack of therapies. Early diagnosis is therefore highly recommended to control ocular diseases. However, the early stages of several ocular pathologies produce symptoms that patients cannot perceive. Moreover, population ageing is an important prevalence factor for ocular diseases, as is the case in most industrialized countries. Ageing also entails reduced mobility, which is a limiting factor for periodic eye screening. These constraints delay ocular diagnosis, and hence a significant number of ocular pathology patients are registered late. Forecast statistics indicate that the affected population will grow in the coming years. Several devices allowing capture of the retina have recently been proposed. They are composed of optical lenses that can be snapped onto a smartphone, providing fundus images of acceptable quality. The challenge is therefore to perform automatic ocular pathology detection on smartphone-captured fundus images that achieves high detection performance while respecting the timing constraints of clinical use. This paper presents a survey of smartphone-captured fundus image quality and the existing methods that use such images to detect retinal structures and abnormalities. For this purpose, we first summarize the works that evaluate smartphone-captured fundus image quality and field of view (FOV). Then, we report the capability to detect abnormalities and ocular pathologies from those fundus images. Thereafter, we propose a flowchart of the processing pipeline of detection methods for smartphone-captured fundus images, and we investigate the implementation environment required to perform retinal abnormality detection.
Multidimensional Retiming (MR) is a software pipelining approach that increases instruction-level parallelism across all the nested loops. Existing MR techniques aim at full parallelism in order to schedule applications with a minimal cycle period. However, the code-size growth that accompanies higher parallelism increases the number of cycle periods. Thus, fully parallel multidimensional applications frequently face limiting factors when implemented on real-time systems. This paper presents a novel technique, called delayed MR, which schedules nested loops with a minimal cycle period without achieving full parallelism. It is formulated in two steps: the first sweeps the nested loops to select and order paths, while the second applies an optimal MR to the selected paths. Our technique is verified by implementing several nested loops on NVIDIA architectures. The experimental results show that our technique achieves average execution-time improvements of 32.8% over the incremental technique and 19.35% over the chained one.
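As a one-dimensional analogue of retiming (a deliberate simplification of the multidimensional case the paper addresses), shifting one operation across iterations breaks the intra-iteration dependency, so the two operations of each cycle can execute in parallel:

```python
# 1-D retiming sketch: op2 depends on op1 within the same iteration.
# Moving op1 one iteration "ahead" (with a prologue) removes that
# intra-iteration dependency, at the cost of a slightly larger code size.

def original(a):
    n = len(a)
    b, c = [0] * n, [0] * n
    for i in range(n):
        b[i] = a[i] * 2      # op1
        c[i] = b[i] + 1      # op2: must wait for op1 of the SAME iteration
    return c

def retimed(a):
    n = len(a)
    if n == 0:
        return []
    c = [0] * n
    b_next = a[0] * 2            # prologue: op1 of iteration 0
    for i in range(n):
        b_cur = b_next
        if i + 1 < n:
            b_next = a[i + 1] * 2  # op1 of the NEXT iteration (parallelizable)
        c[i] = b_cur + 1           # op2 no longer waits on this iteration's op1
    return c
```

Both versions compute the same result; in the retimed loop, the two statements in the body touch independent data and can be issued in the same cycle.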
Ocular pathology detection from fundus images is an important challenge in health care. Each pathology has different severity stages that may be deduced by verifying the existence of specific lesions, and each lesion is characterized by morphological features. Moreover, several lesions of different pathologies share similar features, and a patient may be affected by several pathologies simultaneously. Consequently, ocular pathology detection is a multiclass classification problem with a complex resolution principle. Several methods for detecting ocular pathologies from fundus images have been proposed. Those based on deep learning are distinguished by higher detection performance, owing to their capability to configure the network with respect to the detection objective. This work proposes a survey of deep-learning-based ocular pathology detection methods. First, we study the existing methods for both lesion segmentation and pathology classification. Afterwards, we extract the principal processing steps and analyze the proposed neural network structures. Subsequently, we identify the hardware and software environment required to employ the deep learning architectures. Thereafter, we investigate the experimentation principles used to evaluate the methods and the databases used for both the training and testing phases. The detection performance ratios and execution times are also reported and discussed.
NeoVascularization (NV) occurs in the Proliferative Diabetic Retinopathy (PDR) stage, where the progressive development of new vessels presents a high risk of severe vision loss and blindness. Early NV detection is therefore essential to preserve the patient's vision. Several automated methods have been proposed to detect NV on retinograph-captured fundus images. However, their employment is constrained by the reduced ophthalmologist-per-person ratio and the expensive image-capturing equipment. This paper presents a novel method for NV detection in smartphone-captured fundus images. Implementing the method on a smartphone equipped with an optical lens for fundus capture leads to a Mobile-Aided Screening system for PDR (MAS-PDR). The challenge is to ensure accurate and robust detection, even with moderate fundus image quality, in reduced execution time. With this objective, we identify the major criteria of neovascularized vessels: tortuosity, width, bifurcation, and density. Our main contribution is a sharp feature reflecting each criterion, computed with reduced computational complexity. The features are then provided to a random forest classifier to deduce the PDR stage. A dataset drawn from publicly available databases is used in a 10-fold cross-validation process, achieving 98.69% accuracy, 97.73% sensitivity, and 99.12% specificity. To evaluate robustness, the same experiment is repeated after applying motion-blur filters to the fundus image dataset, yielding 98.91% accuracy, 96.75% sensitivity, and 100% specificity. Moreover, NV screening runs in under 3 s on smartphone devices, demonstrating the suitability of our method for MAS-PDR.
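The abstract names tortuosity among the NV criteria without giving its formula; one standard definition is the arc-to-chord ratio of a vessel centerline, sketched below (an illustrative assumption, not necessarily the paper's exact feature):

```python
import math

def tortuosity(points):
    """Arc length over chord length of a vessel centerline.

    `points` is a list of (x, y) centerline samples. A perfectly
    straight vessel scores 1.0; neovascularized vessels, which are
    typically convoluted, score noticeably higher.
    """
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = tortuosity([(0, 0), (1, 0), (2, 0)])  # -> 1.0
zigzag = tortuosity([(0, 0), (1, 1), (2, 0)])    # -> about 1.414
```

A feature vector of such scalars (one per criterion: tortuosity, width, bifurcation, density) is what would then be fed to the random forest classifier.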
The World Health Organization (WHO) estimates that 285 million people are visually impaired worldwide, 39 million of whom are blind. Glaucoma, cataract, age-related macular degeneration, and diabetic retinopathy are among the leading retinal diseases. Thus, there is an active effort to create and develop methods to automate the screening of retinal diseases. Many Computer-Aided Diagnosis (CAD) systems for ocular diseases have been developed and are widely used. Deep learning (DL) has shown its capabilities in the field of public health, including ophthalmology. In retinal disease diagnosis, the approach based on DL and convolutional neural networks (CNNs) is used to locate, identify, and quantify pathological features, and its performance keeps growing. This chapter presents an overview of DL- and CNN-based methods for detecting retinal abnormalities related to the most severe ocular diseases in retinal images, detailing network architectures, pre-/post-processing, and evaluation experiments. We also present related work on deep-learning-based smartphone applications for earlier screening and diagnosis of retinal diseases.